- Archive-name: neural-net-faq
- Last-modified: 1995/03/23
- URL: http://wwwipd.ira.uka.de/~prechelt/FAQ/neural-net-faq.html
- Maintainer: prechelt@ira.uka.de (Lutz Prechelt)
-
-
- ------------------------------------------------------------------------
- Additions, corrections, or improvements are always welcome.
- Anybody who is willing to contribute any information,
- please email me; if it is relevant, I will incorporate it.
-
- The monthly posting goes out on the 28th of every month.
- ------------------------------------------------------------------------
-
-
- This is a monthly posting to the Usenet newsgroup comp.ai.neural-nets
- (and comp.answers, where it should be findable at ANY time). Its
- purpose is to provide basic information for individuals who are new to the
- field of neural networks or are just beginning to read this group. It
- should help avoid lengthy discussions of questions that usually arise
- for beginners of one kind or the other.
-
- SO, PLEASE, SEARCH THIS POSTING FIRST IF YOU HAVE A QUESTION
- and
- DON'T POST ANSWERS TO FAQs: POINT THE ASKER TO THIS POSTING
-
- This posting is archived in the periodic posting archive on host
- rtfm.mit.edu (and on some other hosts as well). Look in the anonymous
- ftp directory "/pub/usenet/news.answers", the filename is as given in the
- 'Archive-name:' header above. If you do not have anonymous ftp access,
- you can access the archives by mail server as well. Send an E-mail
- message to mail-server@rtfm.mit.edu with "help" and "index" in the
- body on separate lines for more information.
-
- For those of you who read this posting anywhere other than in
- comp.ai.neural-nets: To read comp.ai.neural-nets (or post articles to it)
- you need Usenet News access. Try the commands, 'xrn', 'rn', 'nn', or 'trn'
- on your Unix machine, 'news' on your VMS machine, or ask a local
- guru.
-
- This monthly posting is also available as a hypertext document in WWW
- (World Wide Web) under the URL
- "http://wwwipd.ira.uka.de/~prechelt/FAQ/neural-net-faq.html"
-
- The monthly posting is not meant to discuss any topic exhaustively.
-
- Disclaimer:
- This posting is provided 'as is'.
- No warranty whatsoever is expressed or implied,
- in particular, no warranty that the information contained herein
- is correct or useful in any way, although both are intended.
-
- To find the answer to question number 'x', search for the string
- "x. A:" (so the answer to question 12 is at 12. A: )
-
-
- And now, in the end, we begin:
-
- ========== Questions ==========
- ********************************
-
- 1. What is this newsgroup for? How shall it be used?
- 2. What is a neural network (NN)?
- 3. What can you do with a Neural Network and what not?
- 4. Who is concerned with Neural Networks?
-
- 5. What does 'backprop' mean? What is 'overfitting'?
- 6. Why use a bias input? Why activation functions?
- 7. How many hidden units should I use?
- 8. How many learning methods for NNs exist? Which?
- 9. What about Genetic Algorithms?
- 10. What about Fuzzy Logic?
- 11. How are NNs related to statistical methods?
-
- 12. Good introductory literature about Neural Networks?
- 13. Any journals and magazines about Neural Networks?
- 14. The most important conferences concerned with Neural
- Networks?
- 15. Neural Network Associations?
- 16. Other sources of information about NNs?
-
- 17. Freely available software packages for NN simulation?
- 18. Commercial software packages for NN simulation?
- 19. Neural Network hardware?
-
- 20. Databases for experimentation with NNs?
-
- ========== Answers ==========
- ******************************
-
- 1. A: What is this newsgroup for? How shall it be
- =================================================
- used?
- =====
-
- The newsgroup comp.ai.neural-nets is intended as a forum for
- people who want to use or explore the capabilities of Artificial
- Neural Networks or Neural-Network-like structures.
-
- There should be the following types of articles in this newsgroup:
-
- 1. Requests
- +++++++++++
-
- Requests are articles of the form "I am looking for
- X" where X is something public like a book, an article, a
- piece of software. The most important thing about such a request
- is to be as specific as possible!
-
- If multiple different answers can be expected, the person
- making the request should be prepared to summarize the
- answers he/she receives and announce this with a
- phrase like "Please reply by email, I'll
- summarize to the group" at the end of the posting.
-
- The Subject line of the posting should then be something
- like "Request: X"
-
- 2. Questions
- ++++++++++++
-
- As opposed to requests, questions ask for a larger piece of
- information or a more or less detailed explanation of
- something. To avoid lots of redundant traffic it is important
- that the poster provide, along with the question, all information
- s/he already has about the subject and state the
- actual question as precisely and narrowly as possible. The
- poster should be prepared to summarize the answers
- s/he receives and announce this with a phrase like
- "Please reply by email, I'll summarize to
- the group" at the end of the posting.
-
- The Subject line of the posting should be something like
- "Question: this-and-that" or have the form of a
- question (i.e., end with a question mark)
-
- 3. Answers
- ++++++++++
-
- These are reactions to questions or requests. As a rule of
- thumb articles of type "answer" should be rare. Ideally, in
- most cases either the answer is too specific to be of general
- interest (and should thus be e-mailed to the poster) or a
- summary was announced with the question or request (and
- answers should thus be e-mailed to the poster).
-
- The subject lines of answers are automatically adjusted by
- the news software. Note that sometimes longer threads of
- discussion evolve from an answer to a question or request.
- In this case posters should change the subject line suitably
- as soon as the topic drifts too far from the one
- announced in the original subject line. You can still carry
- along the old subject in parentheses in the form
- "Subject: new subject (was: old subject)"
-
- 4. Summaries
- ++++++++++++
-
- Whenever the answers to a request or question can
- be assumed to be of some general interest, the poster of
- the request or question shall summarize the answers he/she
- received. Such a summary should be announced in the
- original posting of the question or request with a phrase
- like "Please answer by email, I'll
- summarize"
-
- In such a case, people who answer a question should
- NOT post their answers to the newsgroup but instead mail
- them to the poster of the question, who collects and reviews
- them. About 5 to 20 days after the original posting, its
- poster should compile the summary of answers and post it to
- the newsgroup.
-
- Some care should be invested in a summary:
- o simple concatenation of all the answers is not
- enough: instead, redundancies, irrelevancies,
- verbosities, and errors should be filtered out (as well
- as possible)
- o the answers should be separated clearly
- o the contributors of the individual answers should be
- identifiable (unless they requested to remain
- anonymous [yes, that happens])
- o the summary should start with the "quintessence" of
- the answers, as seen by the original poster
- o A summary should, when posted, be clearly
- marked as such by giving it a Subject line
- starting with "SUMMARY:"
- Note that a good summary is pure gold for the rest of the
- newsgroup community, so summary work will be most
- appreciated by all of us. Good summaries are more valuable
- than any moderator! :-)
-
- 5. Announcements
- ++++++++++++++++
-
- Some articles never need any public reaction. These are
- called announcements (for instance, of a workshop,
- conference, or the availability of some technical report or
- software system).
-
- Announcements should be clearly indicated to be such by
- giving them a subject line of the form "Announcement:
- this-and-that"
-
- 6. Reports
- ++++++++++
-
- Sometimes people spontaneously want to report something
- to the newsgroup. This might be special experiences with
- some software, results of their own experiments or conceptual
- work, or especially interesting information from
- somewhere else.
-
- Reports should be clearly indicated to be such by giving
- them a subject line of the form "Report:
- this-and-that"
-
- 7. Discussions
- ++++++++++++++
-
- An especially valuable feature of Usenet is of course
- the possibility of discussing a certain topic with hundreds of
- potential participants. All traffic in the newsgroup that cannot be
- subsumed under one of the above categories should belong
- to a discussion.
-
- If somebody explicitly wants to start a discussion, he/she
- can do so by giving the posting a subject line of the form
- "Subject: Discussion: this-and-that"
-
- It is quite difficult to keep a discussion from drifting into
- chaos, and, unfortunately, as many other newsgroups
- show, there seems to be no reliable way to avoid this. On the
- other hand, comp.ai.neural-nets has not had many
- problems with this effect in the past, so let's just go and
- hope...
-
- ------------------------------------------------------------------------
-
- 2. A: What is a neural network (NN)?
- ====================================
-
- First of all, when we are talking about a neural network, we
- *should* usually say "artificial neural network" (ANN),
- because that is what we mean most of the time. Biological neural
- networks are much more complicated in their elementary
- structures than the mathematical models we use for ANNs.
-
- A vague description is as follows:
-
- An ANN is a network of many very simple processors ("units"),
- each possibly having a (small amount of) local memory. The units
- are connected by unidirectional communication channels
- ("connections"), which carry numeric (as opposed to symbolic)
- data. The units operate only on their local data and on the inputs
- they receive via the connections.
-
- The design motivation is what distinguishes neural networks from
- other mathematical techniques:
-
- A neural network is a processing device, either an algorithm, or
- actual hardware, whose design was motivated by the design and
- functioning of human brains and components thereof.
-
- Most neural networks have some sort of "training" rule whereby
- the weights of connections are adjusted on the basis of presented
- patterns. In other words, neural networks "learn" from examples,
- just like children learn to recognize dogs from examples of dogs,
- and exhibit some structural capability for generalization.
-
- Neural networks normally have great potential for parallelism,
- since the computations of the components are independent of each
- other.
-
- ------------------------------------------------------------------------
-
- 3. A: What can you do with a Neural Network and
- ===============================================
- what not?
- =========
-
- In principle, NNs can compute any computable function, i.e. they
- can do everything a normal digital computer can do. In
- particular, anything that can be represented as a mapping between
- vector spaces can be approximated to arbitrary precision by
- feedforward NNs (the most frequently used type).
-
- In practice, NNs are especially useful for mapping problems which
- are tolerant of some errors, have lots of example data available,
- but to which hard and fast rules cannot easily be applied. NNs
- are, at least today, difficult to apply successfully to problems that
- concern manipulation of symbols and memory.
-
- ------------------------------------------------------------------------
-
- 4. A: Who is concerned with Neural Networks?
- ============================================
-
- Neural Networks are interesting for quite a lot of very dissimilar
- people:
- o Computer scientists want to find out about the properties
- of non-symbolic information processing with neural nets
- and about learning systems in general.
- o Engineers of many kinds want to exploit the capabilities of
- neural networks in many areas (e.g. signal processing) to
- solve their application problems.
- o Cognitive scientists view neural networks as a possible
- apparatus to describe models of thinking and consciousness
- (high-level brain function).
- o Neuro-physiologists use neural networks to describe and
- explore medium-level brain function (e.g. memory, sensory
- systems, motor control).
- o Physicists use neural networks to model phenomena in
- statistical mechanics and for a lot of other tasks.
- o Biologists use Neural Networks to interpret nucleotide
- sequences.
- o Philosophers and some other people may also be interested
- in Neural Networks for various reasons.
-
- ------------------------------------------------------------------------
-
- 5. A: What does 'backprop' mean? What is
- ========================================
- 'overfitting'?
- ===============
-
- 'Backprop' is an abbreviation for 'backpropagation of error' which
- is the most widely used learning method for neural networks
- today. Although it has many disadvantages, which could be
- summarized in the sentence "You hardly know what
- you are actually doing when using backpropagation" :-) it has
- had considerable success in practical applications and is
- relatively easy to apply.
-
- It is used for training layered (i.e., nodes are grouped in layers)
- feedforward (i.e., the arcs joining nodes are unidirectional, and
- there are no cycles) nets, often called "multi-layer perceptrons".
-
- Back-propagation needs a teacher that knows the correct output
- for any input ("supervised learning") and uses gradient descent on
- the error (as provided by the teacher) to train the weights. The
- activation function is (usually) a sigmoidal (i.e., bounded above
- and below, but differentiable) function of a weighted sum of the
- node's inputs.
-
- The use of a gradient descent algorithm to train its weights makes
- it slow to train; but being a feedforward network, it is quite rapid
- during the recall phase.
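-
- To make this concrete, here is a minimal sketch (in Python, using
- numpy, with made-up names) of one backprop training step for a net
- with one hidden layer and squared error. It illustrates the idea
- only and is not a reference implementation; bias weights are
- omitted for brevity (see the answer on bias inputs below):
-
-    import numpy as np
-
-    def sigmoid(x):
-        return 1.0 / (1.0 + np.exp(-x))
-
-    def backprop_step(x, target, W1, W2, lr=0.1):
-        h = sigmoid(W1 @ x)             # hidden activations
-        y = sigmoid(W2 @ h)             # output activations
-        # error signal at the output; 'target' comes from the teacher
-        delta2 = (y - target) * y * (1 - y)
-        # error propagated back through W2 to the hidden layer
-        delta1 = (W2.T @ delta2) * h * (1 - h)
-        W2 -= lr * np.outer(delta2, h)  # gradient descent on the weights
-        W1 -= lr * np.outer(delta1, x)
-        return W1, W2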
-
- Literature:
- Rumelhart, D. E. and McClelland, J. L. (1986): Parallel
- Distributed Processing: Explorations in the Microstructure
- of Cognition (volume 1, pp 318-362). The MIT Press.
-
- (this is the classic one) or one of the dozens of other books or
- articles on backpropagation (see also answer "books").
-
- 'Overfitting' (often also called 'overtraining' or 'overlearning') is
- the phenomenon that a network usually gets worse instead
- of better after a certain point during training when it is trained
- to as low an error as possible. This is because such long training may
- make the network 'memorize' the training patterns, including all
- of their peculiarities. However, one is usually interested in the
- generalization of the network, i.e., the error it exhibits on examples
- NOT seen during training. Learning the peculiarities of the
- training set makes the generalization worse. The network should
- only learn the general structure of the examples.
-
- There are various methods to fight overfitting. The two most
- important classes of such methods are regularization methods
- (such as weight decay) and early stopping. Regularization
- methods try to limit the complexity of the network such that it is
- unable to learn peculiarities. Early stopping aims at stopping the
- training at the point of optimal generalization. A description of the
- early stopping method can for instance be found in section 3.3 of
- /pub/papers/techreports/1994-21.ps.Z on ftp.ira.uka.de
- (anonymous ftp).
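-
- As an illustration of the early stopping idea only (not of the
- method in the report cited above), here is a minimal Python sketch;
- 'net', 'train_step' and 'val_error' are hypothetical stand-ins for
- the objects and functions of whatever simulator you use:
-
-    import copy
-
-    def train_with_early_stopping(net, train_step, val_error,
-                                  max_epochs=1000, patience=20):
-        # keep the weights that generalize best, as measured on a
-        # validation set of examples NOT used for training
-        best_err, best_net, bad = float('inf'), copy.deepcopy(net), 0
-        for _ in range(max_epochs):
-            train_step(net)          # one pass over the training set
-            err = val_error(net)     # error on the validation set
-            if err < best_err:
-                best_err, best_net, bad = err, copy.deepcopy(net), 0
-            else:
-                bad += 1
-                if bad >= patience:  # validation error keeps rising
-                    break
-        return best_net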
-
- ------------------------------------------------------------------------
-
- 6. A: Why use a bias input? Why activation
- ==========================================
- functions?
- ===========
-
- One way of looking at the need for bias inputs is that the inputs to
- each unit in the net define an N-dimensional space, and the unit
- draws a hyperplane through that space, producing an "on" output
- on one side and an "off" output on the other. (With sigmoid units
- the plane will not be sharp -- there will be some gray area of
- intermediate values near the separating plane -- but ignore this
- for now.)
- The weights determine where this hyperplane is in the input space.
- Without a bias input, this separating plane is constrained to pass
- through the origin of the hyperspace defined by the inputs. For
- some problems that's OK, but in many problems the plane would
- be much more useful somewhere else. If you have many units in a
- layer, they share the same input space and without bias would
- ALL be constrained to pass through the origin.
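-
- A tiny numeric sketch (in Python, with made-up weights) shows the
- effect: without a bias, the net input of a unit at the origin is
- always 0, so a sigmoid unit outputs exactly 0.5 there, i.e. the
- separating plane is pinned to the origin:
-
-    import numpy as np
-
-    w = np.array([1.0, -2.0])   # weights of a single sigmoid unit
-    b = 1.5                     # bias weight (its input is clamped to 1)
-
-    def unit(x, bias=True):
-        net = w @ x + (b if bias else 0.0)
-        return 1.0 / (1.0 + np.exp(-net))
-
-    origin = np.array([0.0, 0.0])
-    print(unit(origin, bias=False))  # 0.5: the plane passes through the origin
-    print(unit(origin, bias=True))   # the bias shifts the plane elsewhere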
-
- Activation functions are needed to introduce nonlinearity into the
- network. Without nonlinearity, hidden units would not make nets
- more powerful than just plain perceptrons (which do not have any
- hidden units, just input and output units). The reason is that a
- composition of linear functions is again a linear function.
- However, it is just the nonlinearity (i.e., the capability to represent
- nonlinear functions) that makes multilayer networks so powerful.
- Almost any nonlinear function does the job, although for
- backpropagation learning it must be differentiable and it helps if
- the function is bounded; the popular sigmoidal functions and
- Gaussian functions are the most common choices.
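-
- The claim that a composition of linear functions is again linear
- can be checked numerically; a small sketch with random weights:
-
-    import numpy as np
-
-    rng = np.random.default_rng(0)
-    W1 = rng.normal(size=(3, 4))   # "hidden" layer, but purely linear
-    W2 = rng.normal(size=(2, 3))   # output layer
-    x = rng.normal(size=4)
-
-    y_two_layers = W2 @ (W1 @ x)   # two linear layers ...
-    y_one_layer = (W2 @ W1) @ x    # ... equal one layer with weights W2 @ W1
-    print(np.allclose(y_two_layers, y_one_layer))   # True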
-
- ------------------------------------------------------------------------
-
- 7. A: How many hidden units should I use?
- ==========================================
-
- There is no way to determine a good network topology just from
- the number of inputs and outputs. It depends critically on the
- number of training examples and the complexity of the
- classification you are trying to learn. There are problems with one
- input and one output that require millions of hidden units, and
- problems with a million inputs and a million outputs that require
- only one hidden unit, or none at all.
- Some books and articles offer "rules of thumb" for choosing a
- topology -- Ninputs plus Noutputs divided by two, maybe with
- a square root in there somewhere -- but such rules are total
- garbage. Other rules relate to the number of examples available:
- Use at most so many hidden units that the number of weights in
- the network times 10 is smaller than the number of examples.
- Such rules are only concerned with overfitting and are unreliable
- as well.
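-
- For what such example-counting rules are worth, the weight count
- they refer to is at least easy to compute; a Python sketch for a
- fully connected net with one hidden layer (all numbers made up):
-
-    def n_weights(n_in, n_hidden, n_out):
-        # fully connected, one bias weight per unit
-        return (n_in + 1) * n_hidden + (n_hidden + 1) * n_out
-
-    n_examples = 5000
-    for h in (5, 10, 20, 50):
-        w = n_weights(n_in=20, n_hidden=h, n_out=1)
-        print(h, w, 10 * w <= n_examples)   # rule satisfied or not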
-
- ------------------------------------------------------------------------
-
- 8. A: How many learning methods for NNs exist?
- ==============================================
- Which?
- ======
-
- There are many, many learning methods for NNs by now. Nobody
- knows exactly how many. New ones (or at least variations of existing
- ones) are invented every week. Below is a collection of some of the
- best known methods; it does not claim to be complete.
-
- The main categorization of these methods is the distinction of
- supervised from unsupervised learning:
-
- In supervised learning, there is a "teacher" who in the learning
- phase "tells" the net how well it performs ("reinforcement
- learning") or what the correct behavior would have been ("fully
- supervised learning").
-
- In unsupervised learning the net is autonomous: it just looks at the
- data it is presented with, finds out about some of the properties of
- the data set and learns to reflect these properties in its output.
- Exactly which properties the network can learn to
- recognise depends on the particular network model and learning
- method.
-
- Many of these learning methods are closely connected with a
- certain (class of) network topology.
-
- Now here is the list, just giving some names:
-
- 1. UNSUPERVISED LEARNING (i.e. without a "teacher"):
- 1). Feedback Nets:
- a). Additive Grossberg (AG)
- b). Shunting Grossberg (SG)
- c). Binary Adaptive Resonance Theory (ART1)
- d). Analog Adaptive Resonance Theory (ART2, ART2a)
- e). Discrete Hopfield (DH)
- f). Continuous Hopfield (CH)
- g). Discrete Bidirectional Associative Memory (BAM)
- h). Temporal Associative Memory (TAM)
- i). Adaptive Bidirectional Associative Memory (ABAM)
- j). Kohonen Self-organizing Map/Topology-preserving map (SOM/TPM)
- k). Competitive learning
- 2). Feedforward-only Nets:
- a). Learning Matrix (LM)
- b). Driver-Reinforcement Learning (DR)
- c). Linear Associative Memory (LAM)
- d). Optimal Linear Associative Memory (OLAM)
- e). Sparse Distributed Associative Memory (SDM)
- f). Fuzzy Associative Memory (FAM)
- g). Counterpropagation (CPN)
-
- 2. SUPERVISED LEARNING (i.e. with a "teacher"):
- 1). Feedback Nets:
- a). Brain-State-in-a-Box (BSB)
- b). Fuzzy Cognitive Map (FCM)
- c). Boltzmann Machine (BM)
- d). Mean Field Annealing (MFT)
- e). Recurrent Cascade Correlation (RCC)
- f). Learning Vector Quantization (LVQ)
- g). Backpropagation through time (BPTT)
- h). Real-time recurrent learning (RTRL)
- i). Recurrent Extended Kalman Filter (EKF)
- 2). Feedforward-only Nets:
- a). Perceptron
- b). Adaline, Madaline
- c). Backpropagation (BP)
- d). Cauchy Machine (CM)
- e). Adaptive Heuristic Critic (AHC)
- f). Time Delay Neural Network (TDNN)
- g). Associative Reward Penalty (ARP)
- h). Avalanche Matched Filter (AMF)
- i). Backpercolation (Perc)
- j). Artmap
- k). Adaptive Logic Network (ALN)
- l). Cascade Correlation (CasCor)
- m). Extended Kalman Filter (EKF)
-
- ------------------------------------------------------------------------
-
- 9. A: What about Genetic Algorithms?
- ====================================
-
- There are a number of definitions of GA (Genetic Algorithm). A
- possible one is
-
- A GA is an optimization program
- that starts with
- a population of encoded procedures, (Creation of Life :-> )
- mutates them stochastically, (Get cancer or so :-> )
- and uses a selection process (Darwinism)
- to prefer the mutants with high fitness
- and perhaps a recombination process (Make babies :-> )
- to combine properties of (preferably) the successful mutants.
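-
- A minimal Python sketch of this loop, with a toy bit-counting
- fitness function standing in for a real problem:
-
-    import random
-
-    def fitness(bits):                    # toy fitness: prefer many ones
-        return sum(bits)
-
-    def mutate(bits, rate=0.05):          # stochastic mutation
-        return [b ^ (random.random() < rate) for b in bits]
-
-    def recombine(a, b):                  # one-point crossover
-        cut = random.randrange(1, len(a))
-        return a[:cut] + b[cut:]
-
-    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
-    for _ in range(50):
-        pop.sort(key=fitness, reverse=True)   # selection (Darwinism)
-        parents = pop[:10]
-        pop = [mutate(recombine(random.choice(parents),
-                                random.choice(parents)))
-               for _ in range(30)]
-    print(max(fitness(p) for p in pop))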
-
- Genetic Algorithms are just a special case of the more general idea
- of ``evolutionary computation''. There is a newsgroup that is
- dedicated to the field of evolutionary computation called
- comp.ai.genetic. It has a detailed FAQ posting which, for instance,
- explains the terms "Genetic Algorithm", "Evolutionary
- Programming", "Evolution Strategy", "Classifier System", and
- "Genetic Programming". That FAQ also contains lots of pointers
- to relevant literature, software, other sources of information, et
- cetera et cetera. Please see the comp.ai.genetic FAQ for further
- information.
-
- ------------------------------------------------------------------------
-
- 10. A: What about Fuzzy Logic?
- ==============================
-
- Fuzzy Logic is an area of research based on the work of L.A.
- Zadeh. It is a departure from classical two-valued sets and logic;
- it uses "soft" linguistic (e.g. large, hot, tall) system variables and
- a continuous range of truth values in the interval [0,1], rather
- than strict binary (True or False) decisions and assignments.
-
- Fuzzy logic is used where a system is difficult to model exactly
- (but an inexact model is available), is controlled by a human
- operator or expert, or where ambiguity or vagueness is common. A
- typical fuzzy system consists of a rule base, membership functions,
- and an inference procedure.
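-
- As a tiny illustration of one ingredient: a membership function
- maps a crisp value to a degree of truth in [0,1]. A Python sketch
- of a triangular membership function for a hypothetical fuzzy set
- "hot" (the breakpoints are made up):
-
-    def triangular(x, left, peak, right):
-        # degree of membership in [0,1] for a triangular fuzzy set
-        if x <= left or x >= right:
-            return 0.0
-        if x <= peak:
-            return (x - left) / (peak - left)
-        return (right - x) / (right - peak)
-
-    # temperature in degrees Celsius vs. degree of membership in "hot"
-    for t in (15, 25, 30, 35, 45):
-        print(t, triangular(t, left=20, peak=35, right=50))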
-
- Most Fuzzy Logic discussion takes place in the newsgroup
- comp.ai.fuzzy, but there is also some work (and discussion) about
- combining fuzzy logic with Neural Network approaches in
- comp.ai.neural-nets.
-
- For more details see (for example):
-
- Klir, G.J. and Folger, T.A.: Fuzzy Sets, Uncertainty, and
- Information. Prentice-Hall, Englewood Cliffs, NJ, 1988.
- Kosko, B.: Neural Networks and Fuzzy Systems. Prentice Hall,
- Englewood Cliffs, NJ, 1992.
-
- ------------------------------------------------------------------------
-
- 11. A: How are NNs related to statistical methods?
- ===================================================
-
- There is considerable overlap between the fields of neural
- networks and statistics.
- Statistics is concerned with data analysis. In neural network
- terminology, statistical inference means learning to generalize
- from noisy data. Some neural networks are not concerned with
- data analysis (e.g., those intended to model biological systems) and
- therefore have little to do with statistics. Some neural networks do
- not learn (e.g., Hopfield nets) and therefore have little to do with
- statistics. Some neural networks can learn successfully only from
- noise-free data (e.g., ART or the perceptron rule) and therefore
- would not be considered statistical methods. But most neural
- networks that can learn to generalize effectively from noisy data
- are similar or identical to statistical methods. For example:
- o Feedforward nets with no hidden layer (including
- functional-link neural nets and higher-order neural nets)
- are basically generalized linear models.
- o Feedforward nets with one hidden layer are closely related
- to projection pursuit regression.
- o Probabilistic neural nets are identical to kernel
- discriminant analysis.
- o Kohonen nets for adaptive vector quantization are very
- similar to k-means cluster analysis.
- o Hebbian learning is closely related to principal component
- analysis.
- Some neural network areas that appear to have no close relatives
- in the existing statistical literature are:
- o Kohonen's self-organizing maps.
- o Reinforcement learning (although this is treated in the
- operations research literature as Markov decision
- processes).
- o Stopped training (the purpose and effect of stopped training
- are similar to shrinkage estimation, but the method is quite
- different).
- Feedforward nets are a subset of the class of nonlinear regression
- and discrimination models. Statisticians have studied the
- properties of this general class but had not considered the specific
- case of feedforward neural nets before such networks were
- popularized in the neural network field. Still, many results from
- the statistical theory of nonlinear models apply directly to
- feedforward nets, and the methods that are commonly used for
- fitting nonlinear models, such as various Levenberg-Marquardt
- and conjugate gradient algorithms, can be used to train
- feedforward nets.
-
- While neural nets are often defined in terms of their algorithms or
- implementations, statistical methods are usually defined in terms
- of their results. The arithmetic mean, for example, can be
- computed by a (very simple) backprop net, by applying the usual
- formula SUM(x_i)/n, or by various other methods. What you get
- is still an arithmetic mean regardless of how you compute it. So a
- statistician would consider standard backprop, Quickprop, and
- Levenberg-Marquardt to be different algorithms for fitting
- the same statistical model, such as a feedforward net. On the other
- hand, different training criteria, such as least squares and cross
- entropy, are viewed by statisticians as fundamentally different
- estimation methods with different statistical properties.
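-
- The arithmetic-mean example can be made concrete: a "net" whose
- output is a single trainable weight, trained by gradient descent on
- the squared error, converges to SUM(x_i)/n. A minimal sketch with
- made-up data:
-
-    data = [2.0, 4.0, 9.0]
-
-    w, lr = 0.0, 0.1           # the single "weight" and a learning rate
-    for _ in range(1000):
-        # gradient of 0.5 * sum((w - x_i)^2) with respect to w
-        grad = sum(w - x for x in data)
-        w -= lr * grad
-    print(w, sum(data) / len(data))   # both are 5.0, the mean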
-
- It is sometimes claimed that neural networks, unlike statistical
- models, require no distributional assumptions. In fact, neural
- networks involve exactly the same sort of distributional
- assumptions as statistical models, but statisticians study the
- consequences and importance of these assumptions while most
- neural networkers ignore them. For example, least-squares
- training methods are widely used by statisticians and neural
- networkers. Statisticians realize that least-squares training
- involves implicit distributional assumptions in that least-squares
- estimates have certain optimality properties for noise that is
- normally distributed with equal variance for all training cases and
- that is independent between different cases. These optimality
- properties are consequences of the fact that least-squares
- estimation is maximum likelihood under those conditions.
- Similarly, cross-entropy is maximum likelihood for noise with a
- Bernoulli distribution. If you study the distributional assumptions,
- then you can recognize and deal with violations of the
- assumptions. For example, if you have normally distributed noise
- but some training cases have greater noise variance than others,
- then you may be able to use weighted least squares instead of
- ordinary least squares to obtain more efficient estimates.
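-
- A sketch of that last point (Python with numpy, made-up data, a
- one-parameter linear model): dividing each case by its known noise
- standard deviation turns ordinary least squares into weighted least
- squares, which downweights the noisy case:
-
-    import numpy as np
-
-    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
-    y = np.array([0.1, 2.2, 3.9, 6.1, 12.0])    # roughly y = 2*x + noise
-    sigma = np.array([0.2, 0.2, 0.2, 0.2, 5.0]) # last case is much noisier
-
-    A = x[:, None]                  # design matrix, single parameter
-    ols = np.linalg.lstsq(A, y, rcond=None)[0]
-    wls = np.linalg.lstsq(A / sigma[:, None], y / sigma, rcond=None)[0]
-    print(ols, wls)                 # wls is closer to the true slope 2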
-
- Here are a few references:
-
- Chatfield, C. (1993), "Neural networks: Forecasting breakthrough
- or passing fad", International Journal of Forecasting, 9, 1-3.
-
- Cheng, B. and Titterington, D.M. (1994), "Neural Networks: A
- Review from a Statistical Perspective", Statistical Science, 9,
- 2-54.
-
- Geman, S., Bienenstock, E. and Doursat, R. (1992), "Neural
- Networks and the Bias/Variance Dilemma", Neural Computation,
- 4, 1-58.
-
- Kushner, H. & Clark, D. (1978), _Stochastic Approximation
- Methods for Constrained and Unconstrained Systems_,
- Springer-Verlag.
-
- Michie, D., Spiegelhalter, D.J. and Taylor, C.C. (1994), _Machine
- Learning, Neural and Statistical Classification_, Ellis Horwood.
-
- Ripley, B.D. (1993), "Statistical Aspects of Neural Networks", in
- O.E. Barndorff-Nielsen, J.L. Jensen and W.S. Kendall, eds.,
- _Networks and Chaos: Statistical and Probabilistic Aspects_,
- Chapman & Hall. ISBN 0 412 46530 2.
-
- Sarle, W.S. (1994), "Neural Networks and Statistical Models,"
- Proceedings of the Nineteenth Annual SAS Users Group
- International Conference, Cary, NC: SAS Institute, pp 1538-1550.
- ( ftp://ftp.sas.com/pub/sugi19/neural/neural1.ps)
-
- White, H. (1989), "Learning in Artificial Neural Networks: A
- Statistical Perspective," Neural Computation, 1, 425-464.
-
- White, H. (1992), _Artificial Neural Networks: Approximation
- and Learning Theory_, Blackwell.
-
- ------------------------------------------------------------------------
-
- 12. A: Good introductory literature about Neural
- ================================================
- Networks?
- =========
-
- 0.) The best (subjectively, of course -- please don't flame me):
- ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-
- Haykin, S. (1994). Neural Networks, a Comprehensive
- Foundation. Macmillan, New York, NY. "A very readable, well
- written intermediate to advanced text on NNs. Perspective is
- primarily one of pattern recognition, estimation and signal
- processing. However, there are well-written chapters on
- neurodynamics and VLSI implementation. Though there is
- emphasis on formal mathematical models of NNs as universal
- approximators, statistical estimators, etc., there are also examples
- of NNs used in practical applications. The problem sets at the end
- of each chapter nicely complement the material. In the
- bibliography are over 1000 references. If one buys only one book
- on neural networks, this should be it."
-
- Hertz, J., Krogh, A., and Palmer, R. (1991). Introduction to the
- Theory of Neural Computation. Addison-Wesley: Redwood City,
- California. ISBN 0-201-50395-6 (hardbound) and
- 0-201-51560-1 (paperbound) Comments: "My first impression is
- that this one is by far the best book on the topic. And it's below
- $30 for the paperback."; "Well written, theoretical (but not
- overwhelming)"; It provides a good balance of model development,
- computational algorithms, and applications. The mathematical
- derivations are especially well done"; "Nice mathematical analysis
- on the mechanism of different learning algorithms"; "It is NOT
- for the mathematical beginner. If you don't have a good grasp of
- higher level math, this book can be really tough to get through."
-
- Masters, Timothy (1994). Practical Neural Network Recipes in
- C++. Academic Press, ISBN 0-12-479040-2, US $45 incl. disks.
- "Lots of very good practical advice which most other books lack."
-
- 1.) Books for the beginner:
- +++++++++++++++++++++++++++
-
- Aleksander, I. and Morton, H. (1990). An Introduction to Neural
- Computing. Chapman and Hall. (ISBN 0-412-37780-2).
- Comments: "This book seems to be intended for the first year of
- university education."
-
- Beale, R. and Jackson, T. (1990). Neural Computing, an
- Introduction. Adam Hilger, IOP Publishing Ltd : Bristol. (ISBN
- 0-85274-262-2). Comments: "It's clearly written. Lots of hints as
- to how to get the adaptive models covered to work (not always
- well explained in the original sources). Consistent mathematical
- terminology. Covers perceptrons, error-backpropagation, Kohonen
- self-org model, Hopfield type models, ART, and associative
- memories."
-
- Dayhoff, J. E. (1990). Neural Network Architectures: An
- Introduction. Van Nostrand Reinhold: New York. Comments:
- "Like Wasserman's book, Dayhoff's book is also very easy to
- understand".
-
- Fausett, L. V. (1994). Fundamentals of Neural Networks:
- Architectures, Algorithms and Applications, Prentice Hall, ISBN
- 0-13-334186-0. Also published as a Prentice Hall International
- Edition, ISBN 0-13-042250-9. Sample software (source code
- listings in C and Fortran) is included in an Instructor's Manual.
- "Intermediate in level between Wasserman and
- Hertz/Krogh/Palmer. Algorithms for a broad range of neural
- networks, including a chapter on Adaptive Resonance Theory with
- ART2. Simple examples for each network."
-
- Freeman, James (1994). Simulating Neural Networks with
- Mathematica, Addison-Wesley, ISBN: 0-201-56629-X. Helps
- the reader build his own NNs. The Mathematica code for the
- programs in the book is also available through the internet: Send
- mail to MathSource@wri.com or try http://www.wri.com/ on the
- World Wide Web.
-
- Hecht-Nielsen, R. (1990). Neurocomputing. Addison Wesley.
- Comments: "A good book", "comprises a nice historical overview
- and a chapter about NN hardware. Well structured prose. Makes
- important concepts clear."
-
- McClelland, J. L. and Rumelhart, D. E. (1988). Explorations in
- Parallel Distributed Processing: Computational Models of
- Cognition and Perception (software manual). The MIT Press.
- Comments: "Written in a tutorial style, and includes 2 diskettes of
- NN simulation programs that can be compiled on MS-DOS or
- Unix (and they do, too!)"; "The programs are pretty reasonable as
- an introduction to some of the things that NNs can do."; "There
- are *two* editions of this book. One comes with disks for the IBM
- PC, the other comes with disks for the Macintosh".
-
- McCord Nelson, M. and Illingworth, W.T. (1990). A Practical
- Guide to Neural Nets. Addison-Wesley Publishing Company, Inc.
- (ISBN 0-201-52376-0). Comments: "No formulas at all"; "It
- does not have much detailed model development (very few
- equations), but it does present many areas of application. It
- includes a chapter on current areas of research. A variety of
- commercial applications is discussed in chapter 1. It also includes a
- program diskette with a fancy graphical interface (unlike the PDP
- diskette)".
-
- Muller, B. and Reinhardt, J. (1990). Neural Networks, An
- Introduction. Springer-Verlag: Berlin Heidelberg New York
- (ISBN: 3-540-52380-4 and 0-387-52380-4). Comments: The
- book was developed out of a course on neural-network models
- with computer demonstrations that was taught by the authors to
- Physics students. The book comes together with a PC-diskette.
- The book is divided into three parts: (1) Models of Neural
- Networks, describing several architectures and learning rules,
- including the mathematics; (2) Statistical Physics of Neural
- Networks, a "hard-core" physics section developing formal theories
- of stochastic neural networks; (3) Computer Codes, explaining
- the demonstration programs. The first part gives a nice
- introduction to neural networks together with the formulas.
- Together with the demonstration programs, a 'feel' for neural
- networks can be developed.
-
- Orchard, G.A. & Phillips, W.A. (1991). Neural Computation: A
- Beginner's Guide. Lawrence Erlbaum Associates: London.
- Comments: "Short user-friendly introduction to the area, with a
- non-technical flavour. Apparently accompanies a software
- package, but I haven't seen that yet".
-
- Rao, V.B. & H.V. (1993). C++ Neural Networks and Fuzzy Logic.
- MIS:Press, ISBN 1-55828-298-x, US $45 incl. disks. "Probably
- not 'leading edge' stuff but detailed enough to get your hands
- dirty!"
-
- Wasserman, P. D. (1989). Neural Computing: Theory & Practice.
- Van Nostrand Reinhold: New York. (ISBN 0-442-20743-3)
- Comments: "Wasserman flatly enumerates some common
- architectures from an engineer's perspective ('how it works')
- without ever addressing the underlying fundamentals ('why it
- works') - important basic concepts such as clustering, principal
- components or gradient descent are not treated. It's also full of
- errors, and unhelpful diagrams drawn with what appears to be
- PCB board layout software from the '70s. For anyone who wants
- to do active research in the field I consider it quite inadequate";
- "Okay, but too shallow"; "Quite easy to understand"; "The best
- bedtime reading for Neural Networks. I have given this book to
- numerous colleagues who want to know NN basics, but who never
- plan to implement anything. An excellent book to give your
- manager."
-
- Wasserman, P.D. (1993). Advanced Methods in Neural
- Computing. Van Nostrand Reinhold: New York (ISBN:
- 0-442-00461-3). Comments: Several neural network topics are
- discussed, e.g. Probabilistic Neural Networks, Backpropagation and
- beyond, neural control, Radial Basis Function Networks, Neural
- Engineering. Furthermore, several subjects related to neural
- networks are mentioned, e.g. genetic algorithms, fuzzy logic, chaos.
- Just the functionality of these subjects is described; enough to get
- you started. Lots of references are given to more elaborate
- descriptions. Easy to read, no extensive mathematical background
- necessary.
-
- 2.) The classics:
- +++++++++++++++++
-
- Kohonen, T. (1984). Self-organization and Associative Memory.
- Springer-Verlag: New York. (2nd Edition: 1988; 3rd edition:
- 1989). Comments: "The section on Pattern mathematics is
- excellent."
-
- Rumelhart, D. E. and McClelland, J. L. (1986). Parallel
- Distributed Processing: Explorations in the Microstructure of
- Cognition (volumes 1 & 2). The MIT Press. Comments: "As a
- computer scientist I found the two Rumelhart and McClelland
- books really heavy going and definitely not the sort of thing to
- read if you are a beginner."; "It's quite readable, and affordable
- (about $65 for both volumes)."; "THE Connectionist bible".
-
- 3.) Introductory journal articles:
- ++++++++++++++++++++++++++++++++++
-
- Hinton, G. E. (1989). Connectionist learning procedures. Artificial
- Intelligence, Vol. 40, pp. 185--234. Comments: "One of the better
- neural networks overview papers, although the distinction
- between network topology and learning algorithm is not always
- very clear. Could very well be used as an introduction to neural
- networks."
-
- Knight, K. (1990). Connectionist Ideas and Algorithms.
- Communications of the ACM. November 1990. Vol.33 nr.11, pp
- 59-74. Comments: "A good article, and for most people it is easy
- to find a copy of this journal."
-
- Kohonen, T. (1988). An Introduction to Neural Computing.
- Neural Networks, vol. 1, no. 1. pp. 3-16. Comments: "A general
- review".
-
- 4.) Not-quite-so-introductory literature:
- +++++++++++++++++++++++++++++++++++++++++
-
- Anderson, J. A. and Rosenfeld, E. (Eds). (1988). Neurocomputing:
- Foundations of Research. The MIT Press: Cambridge, MA.
- Comments: "An expensive book, but excellent for reference. It is a
- collection of reprints of most of the major papers in the field."
-
- Anderson, J. A., Pellionisz, A. and Rosenfeld, E. (Eds). (1990).
- Neurocomputing 2: Directions for Research. The MIT Press:
- Cambridge, MA. Comments: "The sequel to their well-known
- Neurocomputing book."
-
- Caudill, M. and Butler, C. (1990). Naturally Intelligent Systems.
- MIT Press: Cambridge, Massachusetts. (ISBN 0-262-03156-6).
- Comments: "I guess one of the best books I read"; "May not be
- suited for people who want to do some research in the area".
-
- Cichocki, A. and Unbehauen, R. (1994). Neural Networks for
- Optimization and Signal Processing. John Wiley & Sons, West
- Sussex, England, 1993, ISBN 0-471-930105 (hardbound), 526
- pages, $57.95. "Partly a textbook and partly a research
- monograph; introduces the basic concepts, techniques, and models
- related to neural networks and optimization, excluding rigorous
- mathematical details. Accessible to a wide readership with a
- differential calculus background. The main coverage of the book is
- on recurrent neural networks with continuous state variables. The
- book title would be more appropriate without mentioning signal
- processing. Well edited, good illustrations."
-
- Khanna, T. (1990). Foundations of Neural Networks.
- Addison-Wesley: New York. Comments: "Not so bad (with a
- page of erroneous formulas (if I remember well), and #hidden
- layers isn't well described)."; "Khanna's intention in writing his
- book with math analysis should be commended but he made
- several mistakes in the math part".
-
- Kung, S.Y. (1993). Digital Neural Networks, Prentice Hall,
- Englewood Cliffs, NJ.
-
- Levine, D. S. (1990). Introduction to Neural and Cognitive
- Modeling. Lawrence Erlbaum: Hillsdale, N.J. Comments: "Highly
- recommended".
-
- Lippmann, R. P. (April 1987). An introduction to computing with
- neural nets. IEEE Acoustics, Speech, and Signal Processing
- Magazine, vol. 4, no. 2, pp 4-22. Comments: "Much acclaimed as
- an overview of neural networks, but rather inaccurate on several
- points. The categorization into binary and continuous-valued
- input neural networks is rather arbitrary, and may be confusing
- for the inexperienced reader. Not all networks discussed are of
- equal importance."
-
- Maren, A., Harston, C. and Pap, R., (1990). Handbook of Neural
- Computing Applications. Academic Press. ISBN: 0-12-471260-6.
- (451 pages) Comments: "They cover a broad area"; "Introductory
- with suggested applications implementation".
-
- Pao, Y. H. (1989). Adaptive Pattern Recognition and Neural
- Networks Addison-Wesley Publishing Company, Inc. (ISBN
- 0-201-12584-6) Comments: "An excellent book that ties together
- classical approaches to pattern recognition with Neural Nets. Most
- other NN books do not even mention conventional approaches."
-
- Rumelhart, D. E., Hinton, G. E. and Williams, R. J. (1986).
- Learning representations by back-propagating errors. Nature, vol
- 323 (9 October), pp. 533-536. Comments: "Gives a very good
- potted explanation of backprop NN's. It gives sufficient detail to
- write your own NN simulation."
-
- Simpson, P. K. (1990). Artificial Neural Systems: Foundations,
- Paradigms, Applications and Implementations. Pergamon Press:
- New York. Comments: "Contains a very useful 37 page
- bibliography. A large number of paradigms are presented. On the
- negative side the book is very shallow. Best used as a complement
- to other books".
-
- Zeidenberg, M. (1990). Neural Networks in Artificial Intelligence.
- Ellis Horwood, Ltd., Chichester. Comments: "Gives the AI point
- of view".
-
- Zornetzer, S. F., Davis, J. L. and Lau, C. (1990). An Introduction
- to Neural and Electronic Networks. Academic Press. (ISBN
- 0-12-781881-2) Comments: "Covers quite a broad range of
- topics (collection of articles/papers)."; "Provides a primer-like
- introduction and overview for a broad audience, and employs a
- strong interdisciplinary emphasis".
-
- ------------------------------------------------------------------------
-
- 13. A: Any journals and magazines about Neural
- ==============================================
- Networks?
- =========
-
- [to be added: comments on speed of reviewing and publishing,
- whether they accept TeX format or ASCII by e-mail, etc.]
-
- A. Dedicated Neural Network Journals:
- +++++++++++++++++++++++++++++++++++++
-
- Title: Neural Networks
- Publish: Pergamon Press
- Address: Pergamon Journals Inc., Fairview Park, Elmsford,
- New York 10523, USA and Pergamon Journals Ltd.
- Headington Hill Hall, Oxford OX3, 0BW, England
- Freq.: 10 issues/year (vol. 1 in 1988)
- Cost/Yr: Free with INNS or JNNS or ENNS membership ($45?),
- Individual $65, Institution $175
- ISSN #: 0893-6080
- Remark: Official Journal of International Neural Network Society (INNS),
- European Neural Network Society (ENNS) and Japanese Neural
- Network Society (JNNS).
- Contains Original Contributions, Invited Review Articles, Letters
- to Editor, Book Reviews, Editorials, Announcements, Software Surveys.
-
- Title: Neural Computation
- Publish: MIT Press
- Address: MIT Press Journals, 55 Hayward Street Cambridge,
- MA 02142-9949, USA, Phone: (617) 253-2889
- Freq.: Quarterly (vol. 1 in 1989)
- Cost/Yr: Individual $45, Institution $90, Students $35; Add $9 Outside USA
- ISSN #: 0899-7667
- Remark: Combination of Reviews (10,000 words), Views (4,000 words)
- and Letters (2,000 words). I have found this journal to be of
- outstanding quality.
- (Note: Remarks supplied by Mike Plonski "plonski@aero.org")
-
- Title: IEEE Transactions on Neural Networks
- Publish: Institute of Electrical and Electronics Engineers (IEEE)
- Address: IEEE Service Center, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ,
- 08855-1331 USA. Tel: (201) 981-0060
- Cost/Yr: $10 for Members belonging to participating IEEE societies
- Freq.: Quarterly (vol. 1 in March 1990)
- Remark: Devoted to the science and technology of neural networks
- which disclose significant technical knowledge, exploratory
- developments and applications of neural networks from biology to
- software to hardware. Emphasis is on artificial neural networks.
- Specific aspects include self organizing systems, neurobiological
- connections, network dynamics and architecture, speech recognition,
- electronic and photonic implementation, robotics and controls.
- Includes Letters concerning new research results.
- (Note: Remarks are from journal announcement)
-
- Title: International Journal of Neural Systems
- Publish: World Scientific Publishing
- Address: USA: World Scientific Publishing Co., 1060 Main Street, River Edge,
- NJ 07666. Tel: (201) 487 9655; Europe: World Scientific Publishing
- Co. Ltd., 57 Shelton Street, London WC2H 9HE, England.
- Tel: (0171) 836 0888; Asia: World Scientific Publishing Co. Pte. Ltd.,
- 1022 Hougang Avenue 1 #05-3520, Singapore 1953, Rep. of Singapore
- Tel: 382 5663.
- Freq.: Quarterly (Vol. 1 in 1990)
- Cost/Yr: Individual $122, Institution $255 (plus $15-$25 for postage)
- ISSN #: 0129-0657 (IJNS)
- Remark: The International Journal of Neural Systems is a quarterly
- journal which covers information processing in natural
- and artificial neural systems. Contributions include research papers,
- reviews, and Letters to the Editor - communications under 3,000
- words in length, which are published within six months of receipt.
- Other contributions are typically published within nine months.
- The journal presents a fresh undogmatic attitude towards this
- multidisciplinary field and aims to be a forum for novel ideas and
- improved understanding of collective and cooperative phenomena with
- computational capabilities.
- Papers should be submitted to World Scientific's UK office. Once a
- paper is accepted for publication, authors are invited to e-mail
- the LaTeX source file of their paper in order to expedite publication.
-
- Title: International Journal of Neurocomputing
- Publish: Elsevier Science Publishers, Journal Dept.; PO Box 211;
- 1000 AE Amsterdam, The Netherlands
- Freq.: Quarterly (vol. 1 in 1989)
- Editor: V.D. Sanchez A.; German Aerospace Research Establishment;
- Institute for Robotics and System Dynamics, 82230 Wessling, Germany.
- Current events and software news editor: Dr. F. Murtagh, ESA,
- Karl-Schwarzschild Strasse 2, D-85748, Garching, Germany,
- phone +49-89-32006298, fax +49-89-32006480, email fmurtagh@eso.org
-
- Title: Neural Processing Letters
- Publish: D facto publications
- Address: 45 rue Masui; B-1210 Brussels, Belgium
- Phone: (32) 2 245 43 63; Fax: (32) 2 245 46 94
- Freq: 6 issues/year (vol. 1 in September 1994)
- Cost/Yr: BEF 4400 (about $140)
- ISSN #: 1370-4621
- Remark: The aim of the journal is to rapidly publish new ideas, original
- developments and work in progress. Neural Processing Letters
- covers all aspects of the Artificial Neural Networks field.
- Publication delay is about 3 months.
- FTP server available:
- ftp://ftp.dice.ucl.ac.be/pub/neural-nets/NPL.
- WWW server available:
- http://www.dice.ucl.ac.be/neural-nets/NPL/NPL.html
-
- Title: Neural Network News
- Publish: AIWeek Inc.
- Address: Neural Network News, 2555 Cumberland Parkway, Suite 299,
- Atlanta, GA 30339 USA. Tel: (404) 434-2187
- Freq.: Monthly (beginning September 1989)
- Cost/Yr: USA and Canada $249, Elsewhere $299
- Remark: Commercial Newsletter
-
- Title: Network: Computation in Neural Systems
- Publish: IOP Publishing Ltd
- Address: Europe: IOP Publishing Ltd, Techno House, Redcliffe Way, Bristol
- BS1 6NX, UK; IN USA: American Institute of Physics, Subscriber
- Services 500 Sunnyside Blvd., Woodbury, NY 11797-2999
- Freq.: Quarterly (1st issue 1990)
- Cost/Yr: USA: $180, Europe: 110 pounds
- Remark: Description: "a forum for integrating theoretical and experimental
- findings across relevant interdisciplinary boundaries." Contents:
- Submitted articles are reviewed by two technical referees for the
- paper's interdisciplinary format and accessibility." Also Viewpoints and
- Reviews commissioned by the editors, abstracts (with reviews) of
- articles published in other journals, and book reviews.
- Comment: While the price discourages me (my comments are based
- upon a free sample copy), I think that the journal succeeds
- very well. The highest density of interesting articles I
- have found in any journal.
- (Note: Remarks supplied by kehoe@csufres.CSUFresno.EDU)
-
- Title: Connection Science: Journal of Neural Computing,
- Artificial Intelligence and Cognitive Research
- Publish: Carfax Publishing
- Address: Europe: Carfax Publishing Company, P. O. Box 25, Abingdon,
- Oxfordshire OX14 3UE, UK. USA: Carfax Publishing Company,
- 85 Ash Street, Hopkinton, MA 01748
- Freq.: Quarterly (vol. 1 in 1989)
- Cost/Yr: Individual $82, Institution $184, Institution (U.K.) 74 pounds
-
- Title: International Journal of Neural Networks
- Publish: Learned Information
- Freq.: Quarterly (vol. 1 in 1989)
- Cost/Yr: 90 pounds
- ISSN #: 0954-9889
- Remark: The journal contains articles, a conference report (at least the
- issue I have), news and a calendar.
- (Note: remark provided by J.R.M. Smits "anjos@sci.kun.nl")
-
- Title: Sixth Generation Systems (formerly Neurocomputers)
- Publish: Gallifrey Publishing
- Address: Gallifrey Publishing, PO Box 155, Vicksburg, Michigan, 49097, USA
- Tel: (616) 649-3772, 649-3592 fax
- Freq.: Monthly (1st issue January, 1987)
- ISSN #: 0893-1585
- Editor: Derek F. Stubbs
- Cost/Yr: $79 (USA, Canada), US$95 (elsewhere)
- Remark: Runs eight to 16 pages monthly. In 1995 it will go to floppy disc-based
- publishing with databases +, "the equivalent to 50 pages per issue are
- planned." Often focuses on specific topics: e.g., August, 1994 contains two
- articles: "Economics, Times Series and the Market," and "Finite Particle
- Analysis - [part] II." Stubbs also directs the company Advanced Forecasting
- Technologies. (Remark by Ed Rosenfeld: ier@aol.com)
-
- Title: JNNS Newsletter (Newsletter of the Japan Neural Network Society)
- Publish: The Japan Neural Network Society
- Freq.: Quarterly (vol. 1 in 1989)
- Remark: (IN JAPANESE LANGUAGE) Official Newsletter of the Japan Neural
- Network Society(JNNS)
- (Note: remarks by Osamu Saito "saito@nttica.NTT.JP")
-
- Title: Neural Networks Today
- Remark: I found this title on a bulletin board in October of last year.
- It was a message of Tim Pattison, timpatt@augean.OZ
- (Note: remark provided by J.R.M. Smits "anjos@sci.kun.nl")
-
- Title: Computer Simulations in Brain Science
-
- Title: International Journal of Neuroscience
-
- Title: Neural Network Computation
- Remark: Possibly the same as "Neural Computation"
-
- Title: Neural Computing and Applications
- Freq.: Quarterly
- Publish: Springer Verlag
- Cost/yr: 120 Pounds
- Remark: Is the journal of the Neural Computing Applications Forum.
- Publishes original research and other information
- in the field of practical applications of neural computing.
-
- B. NN Related Journals:
- +++++++++++++++++++++++
-
- Title: Complex Systems
- Publish: Complex Systems Publications
- Address: Complex Systems Publications, Inc., P.O. Box 6149, Champaign,
- IL 61821-8149, USA
- Freq.: 6 times per year (1st volume is 1987)
- ISSN #: 0891-2513
- Cost/Yr: Individual $75, Institution $225
- Remark: The journal COMPLEX SYSTEMS is devoted to the rapid publication
- of research on the science, mathematics, and engineering of systems
- with simple components but complex overall behavior. Send mail to
- "jcs@complex.ccsr.uiuc.edu" for additional info.
- (Remark is from announcement on Net)
-
- Title: Biological Cybernetics (Kybernetik)
- Publish: Springer Verlag
- Freq.: Monthly (vol. 1 in 1961)
-
- Title: Various IEEE Transactions and Magazines
- Publish: IEEE
- Remark: Primarily see IEEE Trans. on System, Man and Cybernetics;
- Various Special Issues: April 1990 IEEE Control Systems
- Magazine.; May 1989 IEEE Trans. Circuits and Systems.;
- July 1988 IEEE Trans. Acoust. Speech Signal Process.
-
- Title: The Journal of Experimental and Theoretical Artificial Intelligence
- Publish: Taylor & Francis, Ltd.
- Address: London, New York, Philadelphia
- Freq.: ? (1st issue Jan 1989)
- Remark: For submission information, please contact either of the editors:
- Eric Dietrich Chris Fields
- PACSS - Department of Philosophy Box 30001/3CRL
- SUNY Binghamton New Mexico State University
- Binghamton, NY 13901 Las Cruces, NM 88003-0001
- dietrich@bingvaxu.cc.binghamton.edu cfields@nmsu.edu
-
- Title: The Behavioral and Brain Sciences
- Publish: Cambridge University Press
- Remark: (Expensive as hell, I'm sure.)
- This is a delightful journal that encourages discussion on a
- variety of controversial topics. I have especially enjoyed
- reading some papers in there by Dana Ballard and Stephen
- Grossberg (separate papers, not collaborations) a few years
- back. They have a really neat concept: they get a paper,
- then invite a number of noted scientists in the field to
- praise it or trash it. They print these commentaries, and
- give the author(s) a chance to make a rebuttal or
- concurrence. Sometimes, as I'm sure you can imagine, things
- get pretty lively. I'm reasonably sure they are still at
- it--I think I saw them make a call for reviewers a few
- months ago. Their reviewers are called something like
- Behavioral and Brain Associates, and I believe they have to
- be nominated by current associates, and should be fairly
- well established in the field. That's probably more than I
- really know about it but maybe if you post it someone who
- knows more about it will correct any errors I have made.
- The main thing is that I liked the articles I read. (Note:
- remarks by Don Wunsch )
-
- Title: International Journal of Applied Intelligence
- Publish: Kluwer Academic Publishers
- Remark: first issue in 1990(?)
-
- Title: Bulletin of Mathematical Biology
-
- Title: Intelligence
-
- Title: Journal of Mathematical Biology
-
- Title: Journal of Complex Systems
-
- Title: AI Expert
- Publish: Miller Freeman Publishing Co., for subscription call ++415-267-7672.
- Remark: Regularly includes ANN related articles, product
- announcements, and application reports. Listings of ANN
- programs are available on AI Expert affiliated BBS's
-
- Title: International Journal of Modern Physics C
- Publish: USA: World Scientific Publishing Co., 1060 Main Street, River Edge,
- NJ 07666. Tel: (201) 487 9655; Europe: World Scientific Publishing
- Co. Ltd., 57 Shelton Street, London WC2H 9HE, England.
- Tel: (0171) 836 0888; Asia: World Scientific Publishing Co. Pte. Ltd.,
- 1022 Hougang Avenue 1 #05-3520, Singapore 1953, Rep. of Singapore
- Tel: 382 5663.
- Freq: bi-monthly
- Eds: H. Herrmann, R. Brower, G.C. Fox and S Nose
-
- Title: Machine Learning
- Publish: Kluwer Academic Publishers
- Address: Kluwer Academic Publishers
- P.O. Box 358
- Accord Station
- Hingham, MA 02018-0358 USA
- Freq.: 8 issues per year (increasing to 12 in 1993)
- Cost/Yr: Individual $140 (1992); Member of AAAI or CSCSI $88
- Remark: Description: Machine Learning is an international forum for
- research on computational approaches to learning. The journal
- publishes articles reporting substantive research results on a
- wide range of learning methods applied to a variety of task
- domains. The ideal paper will make a theoretical contribution
- supported by a computer implementation.
- The journal has published many key papers in learning theory,
- reinforcement learning, and decision tree methods. Recently
- it has published a special issue on connectionist approaches
- to symbolic reasoning. The journal regularly publishes
- issues devoted to genetic algorithms as well.
-
- Title: INTELLIGENCE - The Future of Computing
- Publish: Intelligence
- Address: INTELLIGENCE, P.O. Box 20008, New York, NY 10025-1510, USA,
- 212-222-1123 voice & fax; email: ier@aol.com, CIS: 72400,1013
- Freq.: Monthly plus four special reports each year (1st issue: May, 1984)
- ISSN #: 1042-4296
- Editor: Edward Rosenfeld
- Cost/Yr: $395 (USA), US$450 (elsewhere)
- Remark: Has absorbed several other newsletters, like Synapse/Connection
- and Critical Technology Trends (formerly AI Trends).
- Covers NN, genetic algorithms, fuzzy systems, wavelets, chaos
- and other advanced computing approaches, as well as molecular
- computing and nanotechnology.
-
- Title: Journal of Physics A: Mathematical and General
- Publish: Inst. of Physics, Bristol
- Freq: 24 issues per year.
- Remark: Statistical mechanics aspects of neural networks
- (mostly Hopfield models).
-
- Title: Physical Review A: Atomic, Molecular and Optical Physics
- Publish: The American Physical Society (Am. Inst. of Physics)
- Freq: Monthly
- Remark: Statistical mechanics of neural networks.
-
- Title: Information Sciences
- Publish: North Holland (Elsevier Science)
- Freq.: Monthly
- ISSN: 0020-0255
- Editor: Paul P. Wang; Department of Electrical Engineering; Duke University;
- Durham, NC 27706, USA
-
- C. Journals loosely related to NNs:
- +++++++++++++++++++++++++++++++++++
-
- Title: JOURNAL OF COMPLEXITY
- Remark: (Must rank alongside Wolfram's Complex Systems)
-
- Title: IEEE ASSP Magazine
- Remark: (April 1987 had the Lippmann intro. which everyone likes to cite)
-
- Title: ARTIFICIAL INTELLIGENCE
- Remark: (Vol 40, September 1989 had the survey paper by Hinton)
-
- Title: COGNITIVE SCIENCE
- Remark: (the Boltzmann machine paper by Ackley et al appeared here
- in Vol 9, 1985)
-
- Title: COGNITION
- Remark: (Vol 28, March 1988 contained the Fodor and Pylyshyn
- critique of connectionism)
-
- Title: COGNITIVE PSYCHOLOGY
- Remark: (no comment!)
-
- Title: JOURNAL OF MATHEMATICAL PSYCHOLOGY
- Remark: (several good book reviews)
-
- ------------------------------------------------------------------------
-
- 14. A: The most important conferences concerned
- ===============================================
- with Neural Networks?
- =====================
-
- [to be added: how often each has taken place so far; most emphasized
- topics; where to get proceedings/calls-for-papers etc. ]
-
- A. Dedicated Neural Network Conferences:
- ++++++++++++++++++++++++++++++++++++++++
-
- 1. Neural Information Processing Systems (NIPS) Annually
- since 1988 in Denver, Colorado; late November or early
- December. Interdisciplinary conference with computer
- science, physics, engineering, biology, medicine, cognitive
- science topics. Covers all aspects of NNs. Proceedings
- appear several months after the conference as a book from
- Morgan Kaufmann, San Mateo, CA.
- 2. International Joint Conference on Neural Networks
- (IJCNN), formerly co-sponsored by INNS and IEEE; no
- longer held.
- 3. Annual Conference on Neural Networks (ACNN)
- 4. International Conference on Artificial Neural Networks
- (ICANN) Annually in Europe. First was 1991. Major
- conference of European Neur. Netw. Soc. (ENNS)
- 5. World Congress on Neural Networks (WCNN). Sponsored by INNS.
- 6. European Symposium on Artificial Neural Networks
- (ESANN). Annually since 1993 in Brussels, Belgium; late
- April; conference on the fundamental aspects of artificial
- neural networks: theory, mathematics, biology, relations
- between neural networks and other disciplines, statistics,
- learning, algorithms, models and architectures,
- self-organization, signal processing, approximation of
- functions, evolutionary learning, etc. Contact: Michel
- Verleysen, D facto conference services, 45 rue Masui,
- B-1210 Brussels, Belgium, phone: +32 2 245 43 63, fax: +
- 32 2 245 46 94, e-mail: esann@dice.ucl.ac.be
- 7. Artificial Neural Networks in Engineering (ANNIE)
- Annually since 1991 in St. Louis, Missouri; held in
- November. (Topics: NN architectures, pattern recognition,
- neuro-control, neuro-engineering systems. Contact:
- ANNIE; Engineering Management Department; 223
- Engineering Management Building; University of
- Missouri-Rolla; Rolla, MO 65401; FAX: (314) 341-6567)
- 8. many many more....
-
- B. Other Conferences
- ++++++++++++++++++++
-
- 1. International Joint Conference on Artificial Intelligence
- (IJCAI)
- 2. Intern. Conf. on Acoustics, Speech and Signal Processing
- (ICASSP)
- 3. Intern. Conf. on Pattern Recognition. Held every other
- year. Has a connectionist subconference. Information:
- General Chair Walter G. Kropatsch
- <krw@prip.tuwien.ac.at>
- 4. Annual Conference of the Cognitive Science Society
- 5. [Vision Conferences?]
-
- C. Pointers to Conferences
- ++++++++++++++++++++++++++
-
- 1. The journal "Neural Networks" has a list of conferences,
- workshops and meetings in each issue. This is quite
- interdisciplinary.
- 2. There is a regular posting on comp.ai.neural-nets from
- Paultje Bakker: "Upcoming Neural Network Conferences",
- which lists names, dates, locations, contacts, and deadlines.
- It is also available for anonymous ftp from ftp.cs.uq.oz.au
- as /pub/pdp/conferences
-
- ------------------------------------------------------------------------
-
- 15. A: Neural Network Associations?
- ===================================
-
- 1. International Neural Network Society (INNS).
- +++++++++++++++++++++++++++++++++++++++++++++++
-
- INNS membership includes subscription to "Neural
- Networks", the official journal of the society. Membership
- is $55 for non-students and $45 for students per year.
- Address: INNS Membership, P.O. Box 491166, Ft.
- Washington, MD 20749.
-
- 2. International Student Society for Neural Networks
- ++++++++++++++++++++++++++++++++++++++++++++++++++++
- (ISSNNets).
- +++++++++++
-
- Membership is $5 per year. Address: ISSNNet, Inc., P.O.
- Box 15661, Boston, MA 02215 USA
-
- 3. Women In Neural Network Research and technology
- ++++++++++++++++++++++++++++++++++++++++++++++++++
- (WINNERS).
- ++++++++++
-
- Address: WINNERS, c/o Judith Dayhoff, 11141 Georgia
- Ave., Suite 206, Wheaton, MD 20902. Phone:
- 301-933-9000.
-
- 4. European Neural Network Society (ENNS)
- +++++++++++++++++++++++++++++++++++++++++
-
- ENNS membership includes subscription to "Neural
- Networks", the official journal of the society. Membership
- is currently (1994) 50 UK pounds (35 UK pounds for
- students) per year. Address: ENNS Membership, Centre for
- Neural Networks, King's College London, Strand, London
- WC2R 2LS, United Kingdom.
-
- 5. Japanese Neural Network Society (JNNS)
- +++++++++++++++++++++++++++++++++++++++++
-
- Address: Japanese Neural Network Society; Department of
- Engineering, Tamagawa University; 6-1-1, Tamagawa
- Gakuen, Machida City, Tokyo; 194 JAPAN; Phone: +81
- 427 28 3457, Fax: +81 427 28 3597
-
- 6. Association des Connexionnistes en THese (ACTH)
- ++++++++++++++++++++++++++++++++++++++++++++++++++
-
- (the French Student Association for Neural Networks);
- Membership is 100 FF per year; Activities: newsletter,
- conference (every year), list of members, electronic forum;
- Journal 'Valgo' (ISSN 1243-4825); Contact: acth@loria.fr
-
- 7. Neurosciences et Sciences de l'Ingenieur (NSI)
- +++++++++++++++++++++++++++++++++++++++++++++++++
-
- Biology & Computer Science. Activity: conference (every
- year). Address: NSI - TIRF / INPG, 46 avenue Felix
- Viallet, 38031 Grenoble Cedex, FRANCE
-
- ------------------------------------------------------------------------
-
- 16. A: Other sources of information about NNs?
- ==============================================
-
- 1. Neuron Digest
- ++++++++++++++++
-
- Internet Mailing List. From the welcome blurb:
- "Neuron-Digest is a list (in digest form) dealing with all
- aspects of neural networks (and any type of network or
- neuromorphic system)". To subscribe, send email to
- neuron-request@cattell.psych.upenn.edu.
- comp.ai.neural-nets readers also find the messages in that
- newsgroup in the form of digests.
-
- 2. Usenet groups comp.ai.neural-nets (Oha!) and
- +++++++++++++++++++++++++++++++++++++++++++++++
- comp.theory.self-org-sys.
- +++++++++++++++++++++++++
-
- There is a periodic posting on comp.ai.neural-nets sent by
- srctran@world.std.com (Gregory Aharonian) about Neural
- Network patents.
-
- 3. Central Neural System Electronic Bulletin Board
- ++++++++++++++++++++++++++++++++++++++++++++++++++
-
- Modem: 409-737-5222; Sysop: Wesley R. Elsberry; 4160
- Pirates' Beach, Galveston, TX 77554;
- welsberr@orca.tamu.edu. Many MS-DOS PD and
- shareware simulations, source code, benchmarks,
- demonstration packages, information files; some Unix,
- Macintosh, Amiga related files. Also available are files on
- AI, AI Expert listings 1986-1991, fuzzy logic, genetic
- algorithms, artificial life, evolutionary biology, and many
- Project Gutenberg and Wiretap etexts. No user fees have
- ever been charged. Home of the NEURAL_NET Echo,
- available through FidoNet, RBBS-Net, and other EchoMail
- compatible bulletin board systems.
-
- 4. Neural ftp archive site ftp.funet.fi
- +++++++++++++++++++++++++++++++++++++++
-
- A large collection of neural network papers and software
- is maintained at the Finnish University Network file
- archive site ftp.funet.fi in directory /pub/sci/neural. It
- contains all the public domain software and papers that the
- maintainers have been able to find. All of these files have
- been transferred from FTP sites in the U.S. and are mirrored
- about every 3 months at the fastest. Contact: neural-adm@ftp.funet.fi
-
- 5. USENET newsgroup comp.org.issnnet
- ++++++++++++++++++++++++++++++++++++
-
- Forum for discussion of academic/student-related issues in
- NNs, as well as information on ISSNNet (see answer 15)
- and its activities.
-
- 6. AI CD-ROM
- ++++++++++++
-
- Network Cybernetics Corporation produces the "AI
- CD-ROM". It is an ISO-9660 format CD-ROM and
- contains a large assortment of software related to artificial
- intelligence, artificial life, virtual reality, and other topics.
- Programs for OS/2, MS-DOS, Macintosh, UNIX, and
- other operating systems are included. Research papers,
- tutorials, and other text files are included in ASCII, RTF,
- and other universal formats. The files have been collected
- from AI bulletin boards, Internet archive sites, University
- computer departments, and other government and civilian
- AI research organizations. Network Cybernetics
- Corporation intends to release annual revisions to the AI
- CD-ROM to keep it up to date with current developments
- in the field. The AI CD-ROM includes collections of files
- that address many specific AI/AL topics including Neural
- Networks (Source code and executables for many different
- platforms including Unix, DOS, and Macintosh. ANN
- development tools, example networks, sample data,
- tutorials. A complete collection of Neuron Digest is included
- as well.) The AI CD-ROM may be ordered directly by
- check, money order, bank draft, or credit card from:
- Network Cybernetics Corporation; 4201 Wingren Road
- Suite 202; Irving, TX 75062-2763; Tel 214/650-2002; Fax
- 214/650-1929; The cost is $129 per disc + shipping ($5/disc
- domestic or $10/disc foreign) (See the comp.ai FAQ for
- further details)
-
- 7. NN events server
- +++++++++++++++++++
-
- There is a WWW page and an FTP Server for
- Announcements of Conferences, Workshops and Other
- Events on Neural Networks at IDIAP in Switzerland.
- WWW-Server:
- http://www.idiap.ch/html/idiap-networks.html,
- FTP-Server: ftp://ftp.idiap.ch/html/NN-events/
-
- 8. World Wide Web
- +++++++++++++++++
-
- On the World Wide Web (WWW, accessible for example via the
- xmosaic program) you can read neural network information,
- for instance by opening one of the following uniform resource
- locators (URLs): http://www.neuronet.ph.kcl.ac.uk
- (NEuroNet, King's College, London),
- http://www.eeb.ele.tue.nl (Eindhoven, Netherlands),
- http://www.msrc.pnl.gov:2080/docs/cie/neural/neural.homepage.html
- (Richland, Washington),
- http://www.cosy.sbg.ac.at/~rschwaig/rschwaig/projects.html
- (Salzburg, Austria),
- http://http2.sils.umich.edu/Public/nirg/nirg1.html
- (Michigan), and http://rtm.science.unitn.it/ (Reactive Memory
- Search (Tabu Search) page, Trento, Italy). Many others
- are available too, changing daily.
-
- 9. Neurosciences Internet Resource Guide
- ++++++++++++++++++++++++++++++++++++++++
-
- This document aims to be a guide to existing, free,
- Internet-accessible resources helpful to neuroscientists of
- all stripes. An ASCII text version (86K) is available in the
- Clearinghouse of Subject-Oriented Internet Resource
- Guides, which is accessible via anonymous FTP, Gopher,
- and WWW hypertext.
-
- 10. INTCON mailing list
- +++++++++++++++++++++++
-
- INTCON (Intelligent Control) is a moderated mailing list
- set up to provide a forum for communication and exchange
- of ideas among researchers in neuro-control, fuzzy logic
- control, reinforcement learning and other related subjects
- grouped under the topic of intelligent control. Send your
- subscribe requests to
- intcon-request@phoenix.ee.unsw.edu.au
-
- ------------------------------------------------------------------------
-
- 17. A: Freely available software packages for NN
- ================================================
- simulation?
- ===========
-
- 1. Rochester Connectionist Simulator
- ++++++++++++++++++++++++++++++++++++
-
- A quite versatile simulator program for arbitrary types of
- neural nets. Comes with a backprop package and an
- X11/SunView interface. Available via anonymous FTP
- from cs.rochester.edu in directory pub/packages/simulator
- as the files README (8 KB) and rcs_v4.2.tar.Z (2.9 MB).
-
- 2. UCLA-SFINX
- +++++++++++++
-
- ftp retina.cs.ucla.edu [131.179.16.6]; Login name: sfinxftp;
- Password: joshua; directory: pub; files: README;
- sfinx_v2.0.tar.Z; Email info request:
- sfinx@retina.cs.ucla.edu
-
- 3. NeurDS
- +++++++++
-
- A simulator for DEC systems supporting VT100 terminals.
- Available for anonymous ftp from gatekeeper.dec.com
- [16.1.0.2] in directory: pub/DEC as the file
- NeurDS031.tar.Z (111 Kb)
-
- 4. PlaNet5.7 (formerly known as SunNet)
- +++++++++++++++++++++++++++++++++++++++
-
- A popular connectionist simulator created by Yoshiro
- Miyata (Chukyo Univ., Japan), with versions that run
- under X Windows and on non-graphics terminals.
- 60-page User's Guide in Postscript. Send any questions to
- miyata@sccs.chukyo-u.ac.jp Available for anonymous ftp
- from ftp.ira.uka.de as /pub/neuron/PlaNet5.7.tar.Z (800 kb)
- or from boulder.colorado.edu [128.138.240.1] as
- /pub/generic-sources/PlaNet5.7.tar.Z
-
- 5. GENESIS
- ++++++++++
-
- GENESIS 1.4.2 (GEneral NEural SImulation System) is a
- general purpose simulation platform which was developed
- to support the simulation of neural systems ranging from
- complex models of single neurons to simulations of large
- networks made up of more abstract neuronal components.
- Most current GENESIS applications involve realistic
- simulations of biological neural systems. Although the
- software can also model more abstract networks, other
- simulators are more suitable for backpropagation and
- similar connectionist modeling. Available for ftp with the
- following procedure: Use 'telnet' to genesis.bbb.caltech.edu
- and login as the user "genesis" (no password). If you
- answer all the questions, an 'ftp' account will
- automatically be created for you. You can then 'ftp' back to
- the machine and download the software (about 3 MB).
- Contact: genesis@cns.caltech.edu. Further information via
- WWW at http://www.bbb.caltech.edu/GENESIS/.
-
- 6. Mactivation
- ++++++++++++++
-
- A neural network simulator for the Apple Macintosh.
- Available for ftp from ftp.cs.colorado.edu [128.138.243.151]
- as /pub/cs/misc/Mactivation-3.3.sea.hqx
-
- 7. Cascade Correlation Simulator
- ++++++++++++++++++++++++++++++++
-
- A simulator for Scott Fahlman's Cascade Correlation
- algorithm. Available for ftp from ftp.cs.cmu.edu
- [128.2.206.173] in directory /afs/cs/project/connect/code as
- the file cascor-v1.0.4.shar (218 KB). There is also a version
- of recurrent cascade correlation in the same directory in
- file rcc1.c (108 KB).
-
- 8. Quickprop
- ++++++++++++
-
- A variation of the back-propagation algorithm developed
- by Scott Fahlman. A simulator is available in the same
- directory as the cascade correlation simulator above in file
- nevprop1.16.shar (137 KB) (see also the description of
- NevProp below).
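- 
- For the curious, the core idea of quickprop, from Fahlman's 1988
- paper, is to treat the error curve for each weight as a parabola
- and jump toward its minimum using the current and previous error
- slopes. The following is only a rough sketch of that update rule
- (variable names are ours; the real algorithm adds weight decay
- and several special cases):
- 
-     #include <math.h>
-     #include <stdio.h>
- 
-     /* One quickprop step for a single weight (sketch).
-        s, s_prev: current and previous error slope dE/dw;
-        dw_prev:   weight change from the previous epoch;
-        mu:        maximum growth factor (Fahlman suggests 1.75);
-        eps:       fallback gradient-descent learning rate. */
-     double quickprop_step(double s, double s_prev,
-                           double dw_prev, double mu, double eps)
-     {
-         double dw;
-         if (dw_prev == 0.0 || s_prev == s) {
-             dw = -eps * s;            /* plain gradient descent */
-         } else {
-             /* jump to the minimum of the parabola through the
-                previous and current (weight, slope) measurements */
-             dw = dw_prev * s / (s_prev - s);
-             if (fabs(dw) > mu * fabs(dw_prev))   /* limit growth */
-                 dw = (dw > 0 ? mu : -mu) * fabs(dw_prev);
-         }
-         return dw;
-     }
- 
-     int main(void)
-     {
-         /* slope halved, same sign: repeat the previous step */
-         printf("%g\n", quickprop_step(-0.5, -1.0, 0.1, 1.75, 0.1));
-         return 0;
-     }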
-
- 9. DartNet
- ++++++++++
-
- DartNet is a Macintosh-based backpropagation simulator,
- developed at Dartmouth by Jamshed Bharucha and Sean
- Nolan as a pedagogical tool. It makes use of the Mac's
- graphical interface, and provides a number of tools for
- building, editing, training, testing and examining networks.
- This program is available by anonymous ftp from
- dartvax.dartmouth.edu [129.170.16.4] as
- /pub/mac/dartnet.sit.hqx (124 KB).
-
- 10. SNNS
- ++++++++
-
- "Stuttgart Neural Network Simulator" from the University
- of Stuttgart, Germany. A luxurious simulator for many
- types of nets; with X11 interface: Graphical 2D and 3D
- topology editor/visualizer, training visualisation, multiple
- pattern set handling etc. Currently supports
- backpropagation (vanilla, online, with momentum term
- and flat spot elimination, batch, time delay),
- counterpropagation, quickprop, backpercolation 1,
- generalized radial basis functions (RBF), RProp, ART1,
- ART2, ARTMAP, Cascade Correlation, Recurrent
- Cascade Correlation, Dynamic LVQ, Backpropagation
- through time (for recurrent networks), batch
- backpropagation through time (for recurrent networks),
- Quickpropagation through time (for recurrent networks),
- Hopfield networks, Jordan and Elman networks,
- autoassociative memory, self-organizing maps, time-delay
- networks (TDNN), and is user-extendable (user-defined
- activation functions, output functions, site functions,
- learning procedures). Works on SunOS, Solaris, IRIX,
- Ultrix, AIX, HP/UX, and Linux. Available for ftp from
- ftp.informatik.uni-stuttgart.de [129.69.211.2] in directory
- /pub/SNNS as SNNSv3.2.tar.Z (2 MB, Source code) and
- SNNSv3.2.Manual.ps.Z (1.4 MB, Documentation). There
- are also various other files in this directory (e.g. the source
- version of the manual, a Sun Sparc executable, older
- versions of the software, some papers, and the software in
- several smaller parts). It may be best to first have a look at
- the file SNNSv3.2.Readme (10 kb). This file contains a
- somewhat more elaborate short description of the
- simulator.
-
- 11. Aspirin/MIGRAINES
- +++++++++++++++++++++
-
- Aspirin/MIGRAINES 6.0 consists of a code generator that
- builds neural network simulations by reading a network
- description (written in a language called "Aspirin") and
- generates a C simulation. An interface (called
- "MIGRAINES") is provided to export data from the neural
- network to visualization tools. The system has been ported
- to a large number of platforms. The goal of Aspirin is to
- provide a common extendible front-end language and
- parser for different network paradigms. The MIGRAINES
- interface is a terminal based interface that allows you to
- open Unix pipes to data in the neural network. Users can
- display the data using either public or commercial
- graphics/analysis tools. Example filters are included that
- convert data exported through MIGRAINES to formats
- readable by Gnuplot 3.0, Matlab, Mathematica, and xgobi.
- The software is available from two FTP sites: from CMU's
- simulator collection on pt.cs.cmu.edu [128.2.254.155] in
- /afs/cs/project/connect/code/am6.tar.Z and from UCLA's
- cognitive science machine ftp.cognet.ucla.edu [128.97.50.19]
- in /pub/alexis/am6.tar.Z (2 MB).
-
- 12. Adaptive Logic Network kit
- ++++++++++++++++++++++++++++++
-
- This package differs from the traditional nets in that it uses
- logic functions rather than floating point; for many tasks,
- ALNs can show many orders of magnitude gain in
- training and performance speed. Anonymous ftp from
- menaik.cs.ualberta.ca [129.128.4.241] in directory
- /pub/atree. See the files README (7 KB), atree2.tar.Z
- (145 kb, Unix source code and examples), atree2.ps.Z (76
- kb, documentation), a27exe.exe (412 kb, MS-Windows 3.x
- executable), atre27.exe (572 kb, MS-Windows 3.x source
- code).
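- 
- To give a rough feel for the idea (a toy illustration of ours,
- not the atree interface): an adaptive logic network computes a
- boolean function of thresholded inputs through a tree of logic
- gates, so evaluating it needs only comparisons and logic
- operations, no floating point arithmetic:
- 
-     #include <stdio.h>
- 
-     /* Toy logic-network evaluation: leaves threshold an input,
-        inner nodes combine their subtrees with AND/OR. A real
-        ALN adapts the node functions during training (omitted). */
-     enum op { LEAF, AND, OR };
-     struct node {
-         enum op op;
-         int input, threshold;    /* used by LEAF nodes only */
-         struct node *l, *r;      /* used by AND/OR nodes    */
-     };
- 
-     static int eval(const struct node *n, const int *x)
-     {
-         if (n->op == LEAF) return x[n->input] >= n->threshold;
-         if (n->op == AND)  return eval(n->l, x) && eval(n->r, x);
-         return eval(n->l, x) || eval(n->r, x);
-     }
- 
-     int main(void)
-     {
-         struct node a = { LEAF, 0, 5, NULL, NULL };
-         struct node b = { LEAF, 1, 3, NULL, NULL };
-         struct node root = { AND, 0, 0, &a, &b };
-         int x[2] = { 7, 2 };
-         printf("%d\n", eval(&root, x));   /* 0, since x[1] < 3 */
-         return 0;
-     }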
-
- 13. NeuralShell
- +++++++++++++++
-
- Formerly available from FTP site
- quanta.eng.ohio-state.edu [128.146.35.1] as
- /pub/NeuralShell/NeuralShell.tar". Currently (April 94)
- not available and undergoing a major reconstruction. Not
- to be confused with NeuroShell by Ward System Group
- (see below under commercial software).
-
- 14. PDP
- +++++++
-
- The PDP simulator package is available via anonymous
- FTP at nic.funet.fi [128.214.6.100] as
- /pub/sci/neural/sims/pdp.tar.Z (202 kb). The simulator is
- also available with the book "Explorations in Parallel
- Distributed Processing: A Handbook of Models, Programs,
- and Exercises" by McClelland and Rumelhart. MIT Press,
- 1988. Comment: "This book is often referred to as PDP vol
- III which is a very misleading practice! The book comes
- with software on an IBM disk but includes a makefile for
- compiling on UNIX systems. The version of PDP available
- at ftp.funet.fi seems identical to the one with the book
- except for a bug in bp.c which occurs when you try to run a
- script of PDP commands using the DO command. This can
- be found and fixed easily."
-
- 15. Xerion
- ++++++++++
-
- Xerion runs on SGI and Sun machines and uses X
- Windows for graphics. The software contains modules that
- implement Back Propagation, Recurrent Back Propagation,
- Boltzmann Machine, Mean Field Theory, Free Energy
- Manipulation, Hard and Soft Competitive Learning, and
- Kohonen Networks. Sample networks built for each of the
- modules are also included. Contact: xerion@ai.toronto.edu.
- Xerion is available via anonymous ftp from
- ftp.cs.toronto.edu [128.100.1.105] in directory /pub/xerion as
- xerion-3.1.ps.Z (153 kB) and xerion-3.1.tar.Z (1.3 MB)
- plus several concrete simulators built with xerion (about 40
- kB each).
-
- 16. Neocognitron simulator
- ++++++++++++++++++++++++++
-
- The simulator is written in C and comes with a list of
- references which are necessary to read to understand the
- specifics of the implementation. The unsupervised version
- is coded without (!) C-cell inhibition. Available for
- anonymous ftp from unix.hensa.ac.uk [129.12.21.7] in
- /pub/neocognitron.tar.Z (130 kB).
-
- 17. Multi-Module Neural Computing Environment
- +++++++++++++++++++++++++++++++++++++++++++++
- (MUME)
- ++++++
-
- MUME is a simulation environment for multi-module
- neural computing. It provides an object oriented facility for
- the simulation and training of multiple nets with various
- architectures and learning algorithms. MUME includes a
- library of network architectures including feedforward,
- simple recurrent, and continuously running recurrent
- neural networks. Each architecture is supported by a
- variety of learning algorithms. MUME can be used for
- large scale neural network simulations as it provides
- support for learning in multi-net environments. It also
- provides pre- and post-processing facilities. The modules
- are provided in a library. Several "front-ends" or clients
- are also available. X Window support is provided by the
- editor/visualization tool Xmume. MUME can be used to
- include non-neural computing modules (decision trees, ...)
- in applications. MUME is available by anonymous ftp on
- mickey.sedal.su.oz.au [129.78.24.170] after signing and
- sending a licence: /pub/license.ps (67 kb). Contact: Marwan
- Jabri, SEDAL, Sydney University Electrical Engineering,
- NSW 2006 Australia, marwan@sedal.su.oz.au
-
- 18. LVQ_PAK, SOM_PAK
- ++++++++++++++++++++
-
- These are packages for Learning Vector Quantization and
- Self-Organizing Maps, respectively. They have been built
- by the LVQ/SOM Programming Team of the Helsinki
- University of Technology, Laboratory of Computer and
- Information Science, Rakentajanaukio 2 C, SF-02150
- Espoo, FINLAND. There are versions for Unix and
- MS-DOS available from cochlea.hut.fi [130.233.168.48] as
- /pub/lvq_pak/lvq_pak-2.1.tar.Z (340 kB, Unix sources),
- /pub/lvq_pak/lvq_p2r1.exe (310 kB, MS-DOS self-extract
- archive), /pub/som_pak/som_pak-1.2.tar.Z (251 kB, Unix
- sources), /pub/som_pak/som_p1r2.exe (215 kB, MS-DOS
- self-extract archive). (further programs to be used with
- SOM_PAK and LVQ_PAK can be found in /pub/utils).
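- 
- As background, the basic LVQ1 rule implemented by LVQ_PAK is easy
- to state: find the codebook vector nearest to a training sample
- and move it toward the sample if their class labels agree, away
- from it if they disagree. A minimal sketch (names are ours; the
- package itself adds LVQ2/LVQ3 variants, initialization, etc.):
- 
-     #include <stddef.h>
- 
-     #define DIM 2   /* input dimensionality, fixed for the sketch */
- 
-     /* One LVQ1 update: the nearest codebook vector is attracted
-        to the sample if the class labels agree, repelled if not. */
-     void lvq1_step(double code[][DIM], const int *cls, size_t ncode,
-                    const double x[DIM], int label, double alpha)
-     {
-         size_t k, d, best = 0;
-         double dist, bestdist = -1.0;
- 
-         for (k = 0; k < ncode; k++) {      /* nearest neighbour */
-             dist = 0.0;
-             for (d = 0; d < DIM; d++)
-                 dist += (x[d] - code[k][d]) * (x[d] - code[k][d]);
-             if (bestdist < 0.0 || dist < bestdist) {
-                 bestdist = dist;
-                 best = k;
-             }
-         }
-         for (d = 0; d < DIM; d++) {        /* move the winner */
-             double delta = alpha * (x[d] - code[best][d]);
-             code[best][d] += (cls[best] == label) ? delta : -delta;
-         }
-     }
- 
-     int main(void)
-     {
-         double code[2][DIM] = { {0.0, 0.0}, {1.0, 1.0} };
-         int    cls[2]       = { 0, 1 };
-         double x[DIM]       = { 0.9, 0.8 };
-         lvq1_step(code, cls, 2, x, 1, 0.1);  /* attracts vector 1 */
-         return 0;
-     }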
-
- 19. SESAME
- ++++++++++
-
- ("Software Environment for the Simulation of Adaptive
- Modular Systems") SESAME is a prototypical software
- implementation which facilitates
- o Object-oriented building blocks approach.
- o Contains a large set of C++ classes useful for neural
- nets, neurocontrol and pattern recognition. No C++
- classes can be used as stand alone, though!
- o C++ classes include CartPole, nondynamic
- two-robot arms, Lunar Lander, Backpropagation,
- Feature Maps, Radial Basis Functions,
- TimeWindows, Fuzzy Set Coding, Potential Fields,
- Pandemonium, and diverse utility building blocks.
- o A kernel which is the framework for the C++
- classes and allows run-time manipulation,
- construction, and integration of arbitrary complex
- and hybrid experiments.
- o Currently no graphic interface for construction, only
- for visualization.
- o Platform is SUN4, XWindows
- Unfortunately no reasonably good introduction has been
- written yet. We hope to have something soon. For
- now we provide papers (e.g. NIPS-92), a reference manual
- (>220 pages), source code (ca. 35,000 lines of code), and a
- SUN4-executable by ftp only. SESAME and its description
- are available in various files for anonymous ftp on
- ftp.gmd.de in the directories /gmd/as/sesame and
- /gmd/as/paper. Questions to sesame-request@gmd.de; there
- is only very limited support available.
-
- 20. Nevada Backpropagation (NevProp)
- ++++++++++++++++++++++++++++++++++++
-
- NevProp is a free, easy-to-use feedforward
- backpropagation (multilayer perceptron) program. It uses
- an interactive character-based interface, and is distributed
- as C source code that should compile and run on most
- platforms. (Precompiled executables are available for
- Macintosh and DOS.) The original version was Quickprop
- 1.0 by Scott Fahlman, as translated from Common Lisp by
- Terry Regier. We added early-stopped training based on a
- held-out subset of data, c index (ROC curve area)
- calculation, the ability to force gradient descent (per-epoch
- or per-pattern), and additional options. FEATURES
- (NevProp version 1.16): UNLIMITED (except by machine
- memory) number of input PATTERNS; UNLIMITED
- number of input, hidden, and output UNITS; Arbitrary
- CONNECTIONS among the various layers' units;
- Clock-time or user-specified RANDOM SEED for initial
- random weights; Choice of regular GRADIENT
- DESCENT or QUICKPROP; Choice of PER-EPOCH or
- PER-PATTERN (stochastic) weight updating;
- GENERALIZATION to a test dataset;
- AUTOMATICALLY STOPPED TRAINING based on
- generalization; RETENTION of best-generalizing weights
- and predictions; Simple but useful GRAPHIC display to
- show smoothness of generalization; SAVING of results to
- a file while working interactively; SAVING of weights file
- and reloading for continued training; PREDICTION-only
- on datasets by applying an existing weights file; In addition
- to RMS error, the concordance, or c index is displayed. The
- c index (area under the ROC curve) shows the correctness
- of the RELATIVE ordering of predictions AMONG the
- cases; i.e., it is a measure of the discriminative power of the
- model. AVAILABILITY: The most updated version of
- NevProp will be made available by anonymous ftp from the
- University of Nevada, Reno: On ftp.scs.unr.edu
- [134.197.10.130] in the directory
- "pub/goodman/nevpropdir", e.g. README.FIRST (45 kb)
- or nevprop1.16.shar (138 kb). VERSION 2 to be released in
- Spring of 1994 -- some of the new features: more flexible
- file formatting (including access to external data files;
- option to prerandomize data order; randomized stochastic
- gradient descent; option to rescale predictor (input)
- variables); linear output units as an alternative to
- sigmoidal units for use with continuous-valued dependent
- variables (output targets); cross-entropy (maximum
- likelihood) criterion function as an alternative to square
- error for use with categorical dependent variables
- (classification/symbolic/nominal targets); and interactive
- interrupt to change settings on-the-fly. Limited support is
- available from Phil Goodman (goodman@unr.edu),
- University of Nevada Center for Biomedical Research.
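- 
- Background on the c index mentioned above: for a binary target it
- is the fraction of (positive, negative) case pairs in which the
- positive case got the larger prediction, counting ties as one
- half; 0.5 means no discrimination, 1.0 a perfect ordering. A
- small sketch of the straightforward O(n^2) computation (our
- names, not NevProp's code):
- 
-     #include <stdio.h>
- 
-     /* c index (area under the ROC curve) for binary targets:
-        fraction of pos/neg pairs ranked in the right order,
-        with ties counted as one half. */
-     double c_index(const double *pred, const int *target, int n)
-     {
-         long halves = 0, pairs = 0;  /* counts in half-units */
-         int i, j;
-         for (i = 0; i < n; i++)
-             for (j = 0; j < n; j++)
-                 if (target[i] == 1 && target[j] == 0) {
-                     pairs++;
-                     if (pred[i] > pred[j])       halves += 2;
-                     else if (pred[i] == pred[j]) halves += 1;
-                 }
-         return pairs ? 0.5 * halves / pairs : 0.0;
-     }
- 
-     int main(void)
-     {
-         double p[4] = { 0.9, 0.3, 0.8, 0.2 };
-         int    t[4] = { 1,   0,   1,   0   };
-         printf("c = %.2f\n", c_index(p, t, 4));  /* c = 1.00 */
-         return 0;
-     }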
-
- 21. Fuzzy ARTmap
- ++++++++++++++++
-
- This is just a small example program. Available for
- anonymous ftp from park.bu.edu [128.176.121.56]
- /pub/fuzzy-artmap.tar.Z (44 kB).
-
- 22. PYGMALION
- +++++++++++++
-
- This is a prototype that stems from an ESPRIT project. It
- implements back-propagation, self-organising maps, and
- Hopfield nets. Available for ftp from ftp.funet.fi
- [128.214.248.6] as /pub/sci/neural/sims/pygmalion.tar.Z
- (1534 kb). (Original site is imag.imag.fr:
- archive/pygmalion/pygmalion.tar.Z).
-
- 23. Basis-of-AI-backprop
- ++++++++++++++++++++++++
-
- Earlier versions have been posted in comp.sources.misc and
- people around the world have used them and liked them.
- This package is free for ordinary users but shareware for
- businesses and government agencies ($200/copy, but then
- for this you get the professional version as well). I do
- support this package via email. Some of the highlights are:
- o in C for UNIX and DOS and DOS binaries
- o gradient descent, delta-bar-delta and quickprop
- o extra fast 16-bit fixed point weight version as well
- as a conventional floating point version
- o recurrent networks
- o numerous sample problems
- Available for ftp from ftp.mcs.com in directory
- /mcsnet.users/drt. Or see the WWW page
- http://www.mcs.com/~drt/home.html. The expanded
- professional version is $30/copy for ordinary individuals
- including academics and $200/copy for businesses and
- government agencies (improved user interface, more
- activation functions, networks can be read into your own
- programs, dynamic node creation, weight decay,
- SuperSAB). More details can be found in the
- documentation for the student version. Contact: Don
- Tveter; 5228 N. Nashville Ave.; Chicago, Illinois 60656;
- drt@mcs.com
-
- 24. Matrix Backpropagation
- ++++++++++++++++++++++++++
-
- MBP (Matrix Back Propagation) is a very efficient
- implementation of the back-propagation algorithm for
- current-generation workstations. The algorithm includes a
- per-epoch adaptive technique for gradient descent. All the
- computations are done through matrix multiplications and
- make use of highly optimized C code. The goal is to reach
- near-peak performance on RISCs with superscalar
- capabilities and fast caches. On some machines (and with
- large networks) a 30-40x speed-up can be measured with
- respect to conventional implementations. The software is
- available by anonymous ftp from risc6000.dibe.unige.it
- [130.251.89.154] as /pub/MBPv1.1.tar.Z (Unix version),
- /pub/MBPv11.zip.Z (MS-DOS version), /pub/mpbv11.ps
- (Documentation). For more information, contact Davide
- Anguita (anguita@dibe.unige.it).
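- 
- The trick MBP exploits: with the training patterns stored as the
- rows of a matrix X, a layer's net input for the whole epoch is a
- single matrix product X*W, and the weight gradient is likewise a
- product of matrices, which maps well onto caches and superscalar
- pipelines. A naive sketch of that central operation (MBP's own
- code is, of course, far more optimized, e.g. with blocked loops):
- 
-     /* Naive C = A * B (m x k times k x n): the operation that
-        dominates per-epoch backprop when patterns are matrix rows
-        (activations = inputs * weights, and similarly for the
-        gradients). */
-     void matmul(int m, int k, int n,
-                 const double *a, const double *b, double *c)
-     {
-         int i, j, l;
-         for (i = 0; i < m; i++)
-             for (j = 0; j < n; j++) {
-                 double sum = 0.0;
-                 for (l = 0; l < k; l++)
-                     sum += a[i * k + l] * b[l * n + j];
-                 c[i * n + j] = sum;
-             }
-     }
- 
-     int main(void)
-     {
-         double x[2 * 2] = { 1, 2, 3, 4 };  /* 2 patterns, 2 inputs */
-         double w[2 * 1] = { 0.5, -0.5 };   /* 2 inputs, 1 unit     */
-         double net[2 * 1];
-         matmul(2, 2, 1, x, w, net);  /* net inputs, all patterns */
-         return 0;
-     }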
-
- 25. WinNN
- +++++++++
-
- WinNN is a shareware Neural Networks (NN) package for
- Windows 3.1. WinNN incorporates a very user-friendly
- interface with a powerful computational engine. WinNN is
- intended as a tool for beginners and more
- advanced neural network users; it provides an alternative
- to more expensive and harder-to-use packages. WinNN
- can implement feed forward multi-layered NN and uses a
- modified fast back-propagation for training. Extensive on
- line help. Has various neuron functions. Allows on the fly
- testing of the network performance and generalization. All
- training parameters can be easily modified while WinNN is
- training. Results can be saved on disk or copied to the
- clipboard. Supports plotting of the outputs and weight
- distribution. Available for ftp from winftp.cica.indiana.edu
- as /pub/pc/win3/programr/winnn093.zip (545 kB).
-
- 26. BIOSIM
- ++++++++++
-
- BIOSIM is a biologically oriented neural network
- simulator. Public domain, runs on Unix (less powerful
- PC-version is available, too), easy to install, bilingual
- (German and English), has a GUI (Graphical User
- Interface), designed for research and teaching, provides
- online help facilities, offers controlling interfaces, batch
- version is available, a DEMO is provided.
- REQUIREMENTS (Unix version): X11 Rel. 3 and above,
- Motif Rel 1.0 and above, 12 MB of physical memory,
- recommended are 24 MB and more, 20 MB disc space.
- REQUIREMENTS (PC version): PC-compatible with MS
- Windows 3.0 and above, 4 MB of physical memory,
- recommended are 8 MB and more, 1 MB disc space. Four
- neuron models are implemented in BIOSIM: a simple
- model only switching ion channels on and off, the original
- Hodgkin-Huxley model, the SWIM model (a modified HH
- model) and the Golowasch-Buchholz model. Dendrites
- consist of a chain of segments without bifurcation. A
- neural network can be created by using the interactive
- network editor which is part of BIOSIM. Parameters can
- be changed via context sensitive menus and the results of
- the simulation can be visualized in observation windows
- for neurons and synapses. Stochastic processes such as
- noise can be included. In addition, biologically oriented
- learning and forgetting processes are modeled, e.g.
- sensitization, habituation, conditioning, hebbian learning
- and competitive learning. Three synaptic types are
- predefined (an excitatory synapse type, an inhibitory
- synapse type and an electrical synapse). Additional synaptic
- types can be created interactively as desired. Available for
- ftp from ftp.uni-kl.de in directory /pub/bio/neurobio: Get
- /pub/bio/neurobio/biosim.readme (2 kb) and
- /pub/bio/neurobio/biosim.tar.Z (2.6 MB) for the Unix
- version or /pub/bio/neurobio/biosimpc.readme (2 kb) and
- /pub/bio/neurobio/biosimpc.zip (150 kb) for the PC version.
- Contact: Stefan Bergdoll; Department of Software
- Engineering (ZXA/US); BASF Inc.; D-67056
- Ludwigshafen; Germany; bergdoll@zxa.basf-ag.de; phone
- 0621-60-21372; fax 0621-60-43735
-
- 27. The Brain
- +++++++++++++
-
- The Brain is an advanced neural network simulator for
- PCs that is simple enough to be used by non-technical
- people, yet sophisticated enough for serious research work.
- It is based upon the backpropagation learning algorithm.
- Three sample networks are included. The documentation
- included provides you with an introduction and overview of
- the concepts and applications of neural networks as well as
- outlining the features and capabilities of The Brain. The
- Brain requires 512K memory and MS-DOS or PC-DOS
- version 3.20 or later (versions for other OS's and machines
- are available). A 386 (with maths coprocessor) or higher is
- recommended for serious use of The Brain. Shareware
- payment required. The demo version is restricted in the number
- of units the network can handle due to memory constraints on
- PCs. The registered version allows use of extra memory.
- External documentation included: 39Kb, 20 Pages. Source
- included: No (Source comes with registration). Available
- via anonymous ftp from ftp.tu-clausthal.de as
- /pub/msdos/science/brain12.zip (78 kb) and from
- ftp.technion.ac.il as /pub/contrib/dos/brain12.zip (78 kb)
- Contact: David Perkovic; DP Computing; PO Box 712;
- Noarlunga Center SA 5168; Australia; Email:
- dip@mod.dsto.gov.au (preferred) or dpc@mep.com or
- perkovic@cleese.apana.org.au
-
- 28. FuNeGen 1.0
- +++++++++++++++
-
- FuNeGen is an MLP-based software program to generate
- fuzzy rule based classifiers. A limited version (maximum of
- 7 inputs and 3 membership functions for each input) for
- PCs is available for anonymous ftp from
- obelix.microelectronic.e-technik.th-darmstadt.de in
- directory /pub/neurofuzzy. For further information see the
- file read.me. Contact: Saman K. Halgamuge
-
- 29. NeuDL -- Neural-Network Description Language
- ++++++++++++++++++++++++++++++++++++++++++++++++
-
- NeuDL is a description language for the design, training,
- and operation of neural networks. It is currently limited to
- the backpropagation neural-network model; however, it
- offers a great deal of flexibility. For example, the user can
- explicitly specify the connections between nodes and can
- create or destroy connections dynamically as training
- progresses. NeuDL is an interpreted language resembling C
- or C++. It also has instructions dealing with
- training/testing set manipulation as well as neural network
- operation. A NeuDL program can be run in interpreted
- mode or it can be automatically translated into C++ which
- can be compiled and then executed. The NeuDL interpreter
- is written in C++ and can be easily extended with new
- instructions. NeuDL is available from the anonymous ftp
- site at The University of Alabama: cs.ua.edu (130.160.44.1)
- in the file /pub/neudl/NeuDLver021.tar. The tarred file
- contains the interpreter source code (in C++), a user
- manual, a paper about NeuDL, and about 25 sample
- NeuDL programs. A document demonstrating NeuDL's
- capabilities is also available from the ftp site:
- /pub/neudl/demo.doc (also /pub/neudl/NeuDL/demo.doc). For
- more information contact the author: Joey Rogers
- (jrogers@buster.eng.ua.edu).
-
- 30. NeoC Explorer (Pattern Maker included)
- ++++++++++++++++++++++++++++++++++++++++++
-
- The NeoC software is an implementation of Fukushima's
- Neocognitron neural network. Its purpose is to test the
- model and to facilitate interactivity for the experiments.
- Some substantial features: GUI, explorer and tester
- operation modes, recognition statistics, performance
- analysis, element display, easy net construction. PLUS,
- a pattern maker utility for testing ANN: GUI, text file
- output, transformations. Available for anonymous FTP
- from OAK.Oakland.Edu (141.210.10.117) as
- /SimTel/msdos/neurlnet/neocog10.zip (193 kB, DOS
- version)
-
- 31. AINET
- +++++++++
-
- aiNet is a shareware Neural Networks (NN) application
- for MS-Windows 3.1. It does not require learning, has no
- limits in parameters (input & output neurons), no limits in
- sample size. It is not sensitive to noise in the data.
- Database can be changed dynamically. It provides a way to
- estimate the rate of error in your prediction. Missing values
- are handled automatically. It has a graphical
- spreadsheet-like user interface and an on-line help system. It
- also provides several different chart types. The aiNet manual
- (90 pages) is divided into: "User's Guide", "Basics About
- Modeling with the AINET", "Examples". Special
- requirements: Windows 3.1, VGA or better. Can be
- downloaded from
- ftp://ftp.cica.indiana.edu/pub/pc/win3/programr/ainet100.zip
- or from
- ftp://oak.oakland.edu/SimTel/win3/math/ainet100.zip
-
- For some of these simulators there are user mailing lists. Get the
- packages and look into their documentation for further info.
-
- If you are using a small computer (PC, Mac, etc.) you may want
- to have a look at the Central Neural System Electronic Bulletin
- Board (see answer 16 above). Modem: 409-737-5312; Sysop: Wesley R.
- Elsberry; 4160 Pirates' Beach, Galveston, TX, USA;
- welsberr@orca.tamu.edu. There are lots of small simulator
- packages, the CNS ANNSIM file set. There is an ftp mirror site
- for the CNS ANNSIM file set at me.uta.edu [129.107.2.20] in the
- /pub/neural directory. Most ANN offerings are in
- /pub/neural/annsim.
-
- ------------------------------------------------------------------------
-
- 18. A: Commercial software packages for NN
- ==========================================
- simulation?
- ===========
-
- 1. nn/xnn
- +++++++++
-
- Name: nn/xnn
- Company: Neureka ANS
- Address: Klaus Hansens vei 31B
- 5037 Solheimsviken
- NORWAY
- Phone: +47-55544163 / +47-55201548
- Email: arnemo@eik.ii.uib.no
- Basic capabilities:
- Neural network development tool. nn is a language for specification of
- neural network simulators. Produces C-code and executables for the
- specified models, therefore ideal for application development. xnn is
- a graphical front-end to nn and the simulation code produced by nn.
- Gives graphical representations in a number of formats of any
- variables during simulation run-time. Comes with a number of
- pre-implemented models, including: Backprop (several variants), Self
- Organizing Maps, LVQ1, LVQ2, Radial Basis Function Networks,
- Generalized Regression Neural Networks, Jordan nets, Elman nets,
- Hopfield, etc.
- Operating system: nn: UNIX or MS-DOS, xnn: UNIX/X-windows
- System requirements: 10 Mb HD, 2 Mb RAM
- Approx. price: USD 2000,-
-
- 2. BrainMaker
- +++++++++++++
-
- Name: BrainMaker, BrainMaker Pro
- Company: California Scientific Software
- Address: 10024 Newtown rd, Nevada City, CA, 95959 USA
- Phone,Fax: 916 478 9040, 916 478 9041
- Email: calsci!mittmann@gvgpsa.gvg.tek.com (flakey connection)
- Basic capabilities: train backprop neural nets
- Operating system: DOS, Windows, Mac
- System requirements:
- Uses XMS or EMS for large models(PCs only): Pro version
- Approx. price: $195, $795
-
- BrainMaker Pro 3.0 (DOS/Windows) $795
- Genetic Training add-on $250
- BrainMaker 3.0 (DOS/Windows/Mac) $195
- Network Toolkit add-on $150
- BrainMaker 2.5 Student version (quantity sales only, about $38 each)
-
- BrainMaker Pro C30 Accelerator Board
- w/ 5Mb memory $9750
- w/32Mb memory $13,000
-
- Intel iNNTS NN Development System $11,800
- Intel EMB Multi-Chip Board $9750
- Intel 80170 chip set $940
-
- Introduction To Neural Networks book $30
-
- California Scientific Software can be reached at:
- Phone: 916 478 9040 Fax: 916 478 9041 Tech Support: 916 478 9035
- Mail: 10024 newtown rd, Nevada City, CA, 95959, USA
- 30 day money back guarantee, and unlimited free technical support.
- BrainMaker package includes:
- The book Introduction to Neural Networks
- BrainMaker Users Guide and reference manual
- 300 pages , fully indexed, with tutorials, and sample networks
- Netmaker
- Netmaker makes building and training Neural Networks easy, by
- importing and automatically creating BrainMaker's Neural Network
- files. Netmaker imports Lotus, Excel, dBase, and ASCII files.
- BrainMaker
- Full menu and dialog box interface, runs Backprop at 750,000 cps
- on a 33Mhz 486.
- ---Features ("P" means is avaliable in professional version only):
- Pull-down Menus, Dialog Boxes, Programmable Output Files,
- Editing in BrainMaker, Network Progress Display (P),
- Fact Annotation, supports many printers, NetPlotter,
- Graphics Built In (P), Dynamic Data Exchange (P),
- Binary Data Mode, Batch Use Mode (P), EMS and XMS Memory (P),
- Save Network Periodically, Fastest Algorithms,
- 512 Neurons per Layer (P: 32,000), up to 8 layers,
- Specify Parameters by Layer (P), Recurrence Networks (P),
- Prune Connections and Neurons (P), Add Hidden Neurons In Training,
- Custom Neuron Functions, Testing While Training,
- Stop training when...-function (P), Heavy Weights (P),
- Hypersonic Training, Sensitivity Analysis (P), Neuron Sensitivity (P),
- Global Network Analysis (P), Contour Analysis (P),
- Data Correlator (P), Error Statistics Report,
- Print or Edit Weight Matrices, Competitor (P), Run Time System (P),
- Chip Support for Intel, American Neurologics, Micro Devices,
- Genetic Training Option (P), NetMaker, NetChecker,
- Shuffle, Data Import from Lotus, dBASE, Excel, ASCII, binary,
- Financial Data (P), Data Manipulation, Cyclic Analysis (P),
- User's Guide quick start booklet,
- Introduction to Neural Networks 324 pp book
-
- 3. SAS Software/ Neural Net add-on
- ++++++++++++++++++++++++++++++++++
-
- Name: SAS Software
- Company: SAS Institute, Inc.
- Address: SAS Campus Drive, Cary, NC 27513, USA
- Phone,Fax: (919) 677-8000
- Email: saswss@unx.sas.com (Neural net inquiries only)
-
- Basic capabilities:
- Feedforward nets with numerous training methods
- and loss functions, plus statistical analogs of
- counterpropagation and various unsupervised
- architectures
- Operating system: Lots
- System requirements: Lots
- Uses XMS or EMS for large models(PCs only): Runs under Windows, OS/2
- Approx. price: Free neural net software, but you have to license
- SAS/Base software and preferably the SAS/OR, SAS/ETS,
- and/or SAS/STAT products.
- Comments: Oriented toward data analysis and statistical applications
-
- 4. NeuralWorks
- ++++++++++++++
-
- Name: NeuralWorks Professional II Plus (from NeuralWare)
- Company: NeuralWare Inc.
- Address: Pittsburgh, PA 15276-9910
- Phone: (412) 787-8222
- FAX: (412) 787-8220
-
- Distributor for Europe:
- Scientific Computers GmbH.
- Franzstr. 107, 52064 Aachen
- Germany
- Tel. (49) +241-26041
- Fax. (49) +241-44983
- Email. info@scientific.de
-
- Basic capabilities:
- supports over 30 different nets: backprop, ART-1, Kohonen,
- modular neural network, general regression, fuzzy ARTMAP,
- probabilistic nets, self-organizing map, LVQ, Boltzmann,
- BSB, SPR, etc...
- Extendable with optional package.
- ExplainNet, Flashcode (compiles net in .c code for runtime),
- user-defined I/O in C possible. ExplainNet (to eliminate
- extra inputs), pruning, savebest, graphing instruments like
- correlation, Hinton diagrams, RMS error graphs, etc.
- Operating system: PC, Sun, IBM RS6000, Apple Macintosh, SGI, DEC, HP.
- System requirements: varies. PC:2MB extended memory+6MB Harddisk space.
- Uses windows compatible memory driver (extended).
- Uses extended memory.
- Approx. price : call (depends on platform)
- Comments : award winning documentation, one of the market
- leaders in NN software.
-
- 5. MATLAB Neural Network Toolbox (for use with Matlab
- +++++++++++++++++++++++++++++++++++++++++++++++++++++
- 4.x)
- ++++
-
- Contact: The MathWorks, Inc. Phone: 508-653-1415
- 24 Prime Park Way FAX: 508-653-2997
- Natick, MA 01760 email: info@mathworks.com
-
- The Neural Network Toolbox is a powerful collection of
- MATLAB functions for the design, training, and
- simulation of neural networks. It supports a wide range of
- network architectures with an unlimited number of
- processing elements and interconnections (up to operating
- system constraints). Supported architectures and training
- methods include: supervised training of feedforward
- networks using the perceptron learning rule, Widrow-Hoff
- rule, several variations on backpropagation (including the
- fast Levenberg-Marquardt algorithm), and radial basis
- networks; supervised training of recurrent Elman
- networks; unsupervised training of associative networks
- including competitive and feature map layers; Kohonen
- networks, self-organizing maps, and learning vector
- quantization. The Neural Network Toolbox contains a
- textbook-quality Users' Guide and uses tutorials, reference
- materials, and sample applications with code examples to
- explain the design and use of each network architecture
- and paradigm. The Toolbox is delivered as MATLAB
- M-files, enabling users to see the algorithms and
- implementations, as well as to make changes or create new
- functions to address a specific application.
-
- (Comment by Richard Andrew Miles Outerbridge,
- RAMO@UVPHYS.PHYS.UVIC.CA:) Matlab is spreading
- like hotcakes (and the educational discounts are very
- impressive). The newest release of Matlab (4.0) answers
- the question "if you could only program in one language
- what would it be?". The neural network toolkit is worth
- getting for the manual alone. Matlab is available with lots
- of other toolkits (signal processing, optimization, etc.) but I
- don't use them much - the main package is more than
- enough. The nice thing about the Matlab approach is that
- you can easily interface the neural network stuff with
- anything else you are doing.
-
- 6. Propagator
- +++++++++++++
-
- Contact: ARD Corporation,
- 9151 Rumsey Road, Columbia, MD 21045, USA
- propagator@ard.com
- Easy to use neural network training package. A GUI implementation of
- backpropagation networks with five layers (32,000 nodes per layer).
- Features dynamic performance graphs, training with a validation set,
- and C/C++ source code generation.
- For Sun (Solaris 1.x & 2.x, $499),
- PC (Windows 3.x, $199)
- Mac (System 7.x, $199)
- Floating point coprocessor required, Educational Discount,
- Money Back Guarantee, Multi User Discount
- Windows Demo on:
- nic.funet.fi /pub/msdos/windows/demo
- oak.oakland.edu /pub/msdos/neural_nets
- gatordem.zip pkzip 2.04g archive file
- gatordem.txt readme text file
-
- 7. NeuroForecaster
- ++++++++++++++++++
-
- Name: NeuroForecaster(TM)/Genetica 3.1
- Contact: Accel Infotech (S) Pte Ltd; 648 Geylang Road;
- Republic of Singapore 1438; Phone: +65-7446863; Fax: +65-7492467
- accel@solomon.technet.sg
- For IBM PC 386/486 with mouse, or compatibles MS Windows* 3.1,
- MS DOS 5.0 or above 4 MB RAM, 5 MB available harddisk space min;
- 3.5 inch floppy drive, VGA monitor or above, Math coprocessor recommended.
- Neuroforecaster 3.1 for Windows is priced at US$1199 per single user
- license. Please email us (accel@solomon.technet.sg) for order form.
- More information about NeuroForecaster(TM)/Genetica may be found in
- ftp://ftp.technet.sg/Technet/user/accel/nfga40.exe
- NeuroForecaster is a user-friendly neural network program specifically
- designed for building sophisticated and powerful forecasting and
- decision-support systems (Time-Series Forecasting, Cross-Sectional
- Classification, Indicator Analysis)
- Features:
- * GENETICA Net Builder Option for automatic network optimization
- * 12 Neuro-Fuzzy Network Models
- * Multitasking & Background Training Mode
- * Unlimited Network Capacity
- * Rescaled Range Analysis & Hurst Exponent to Unveil Hidden Market
- Cycles & Check for Predictability
- * Correlation Analysis to Compute Correlation Factors to Analyze the
- Significance of Indicators
- * Weight Histogram to Monitor the Progress of Learning
- * Accumulated Error Analysis to Analyze the Strength of Input Indicators
- Its user-friendly interface allows the users to build applications quickly,
- easily and interactively, analyze the data visually and see the results
- immediately.
- The following example applications are included in the package:
- * Credit Rating - for generating the credit rating of bank loan
- applications.
- * Stock market 6 monthly returns forecast
- * Stock selection based on company ratios
- * US$ to Deutschmark exchange rate forecast
- * US$ to Yen exchange rate forecast
- * US$ to SGD exchange rate forecast
- * Property price valuation
- * XOR - a classical problem to show the results are better than others
- * Chaos - Prediction of Mackey-Glass chaotic time series
- * SineWave - For demonstrating the power of Rescaled Range Analysis and
- significance of window size
- Techniques Implemented:
- * GENETICA Net Builder Option - network creation & optimization based on
- Darwinian evolution theory
- * Backprop Neural Networks - the most widely-used training algorithm
- * Fastprop Neural Networks - speeds up training of large problems
- * Radial Basis Function Networks - best for pattern classification problems
- * Neuro-Fuzzy Network
- * Rescaled Range Analysis - computes Hurst exponents to unveil hidden
- cycles & check for predictability
- * Correlation Analysis - to identify significant input indicators
-
- 8. Products of NESTOR, Inc.
- +++++++++++++++++++++++++++
-
- 530 Fifth Avenue; New York, NY 10036; USA; Tel.:
- 001-212-398-7955
-
- Founders: Dr. Leon Cooper (a Nobel Prize winner) and Dr.
- Charles Elbaum (Brown University). Neural Network
- Models: The adaptive shape and pattern recognition model
- (Restricted Coulomb Energy - RCE) developed by NESTOR is one
- of the most powerful neural network models and is used in the
- later products. The basis for NESTOR products is the Nestor
- Learning System - NLS. Later products are the Character
- Learning System - CLS and the Image Learning System -
- ILS. The Nestor Development System - NDS is a development
- tool in standard C - one of the most powerful PC tools
- for simulation and development of Neural Networks. NLS
- is a multi-layer, feed forward system with low connectivity
- within each layer and no relaxation procedure used for
- determining an output response. This unique architecture
- allows the NLS to operate in real time without the need for
- special computers or custom hardware. NLS is composed of
- multiple neural networks, each specializing in a subset of
- information about the input patterns. The NLS integrates
- the responses of its several parallel networks to produce a
- system response that is far superior to that of other neural
- networks. Minimized connectivity within each layer results
- in rapid training and efficient memory utilization - ideal for
- current VLSI technology. Intel has made such a chip, the
- Ni1000.
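- 
- For background, RCE is a prototype-based classifier: each stored
- prototype carries a class label and a radius of influence, a
- pattern fires every prototype whose sphere contains it, and
- training shrinks radii that fire wrongly while committing new
- prototypes for unrecognized patterns. A hedged sketch of the
- classification step only (our names and simplifications, not
- Nestor's code):
- 
-     #include <stdio.h>
- 
-     #define DIM 2
-     #define UNCLASSIFIED (-1)
- 
-     struct proto { double center[DIM]; double radius; int cls; };
- 
-     /* RCE-style classification: return the common class of all
-        prototypes whose sphere contains x, or UNCLASSIFIED if
-        none fires or the firing prototypes disagree. */
-     int rce_classify(const struct proto *p, size_t np,
-                      const double x[DIM])
-     {
-         int cls = UNCLASSIFIED;
-         size_t k, d;
-         for (k = 0; k < np; k++) {
-             double dist2 = 0.0;
-             for (d = 0; d < DIM; d++)
-                 dist2 += (x[d] - p[k].center[d])
-                        * (x[d] - p[k].center[d]);
-             if (dist2 <= p[k].radius * p[k].radius) {
-                 if (cls == UNCLASSIFIED)  cls = p[k].cls;
-                 else if (cls != p[k].cls) return UNCLASSIFIED;
-             }
-         }
-         return cls;
-     }
- 
-     int main(void)
-     {
-         struct proto p[2] = { { {0.0, 0.0}, 1.0, 0 },
-                               { {3.0, 3.0}, 1.0, 1 } };
-         double x[DIM] = { 0.5, 0.5 };
-         printf("%d\n", rce_classify(p, 2, x));  /* prints 0 */
-         return 0;
-     }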
-
- 9. NeuroShell2/NeuroWindows
- +++++++++++++++++++++++++++
-
- NeuroShell 2 combines powerful neural network
- architectures, a Windows icon driven user interface, and
- sophisticated utilities for MS-Windows machines. Internal
- format is spreadsheet, and users can specify that
- NeuroShell 2 use their own spreadsheet when editing.
- Includes both Beginner's and Advanced systems, a
- Runtime capability, and a choice of 15 Backpropagation,
- Kohonen, PNN and GRNN architectures. Includes Rules,
- Symbol Translate, Graphics, File Import/Export modules
- (including MetaStock from Equis International) and
- NET-PERFECT to prevent overtraining. Options
- available: Market Technical Indicator Option ($295),
- Market Technical Indicator Option with Optimizer ($590),
- and Race Handicapping Option ($149). NeuroShell price:
- $495.
-
- NeuroWindows is a programmer's tool in a Dynamic Link
- Library (DLL) that can create as many as 128 interactive
- nets in an application, each with 32 slabs in a single
- network, and 32K neurons in a slab. Includes
- Backpropagation, Kohonen, PNN, and GRNN paradigms.
- NeuroWindows can mix supervised and unsupervised nets.
- The DLL may be called from Visual Basic, Visual C,
- Access Basic, C, Pascal, and VBA/Excel 5. NeuroWindows
- price: $369.
-
- Contact: Ward Systems Group, Inc.; Executive Park West;
- 5 Hillcrest Drive; Frederick, MD 21702; USA; Phone: 301
- 662-7950; FAX: 301 662-5666. Contact us for a free demo
- diskette and Consumer's Guide to Neural Networks.
-
- 10. NuTank
- ++++++++++
-
- NuTank stands for NeuralTank. It is educational and
- entertainment software. In this program one is given the
- shell of a two-dimensional robotic tank. The tank has various
- I/O devices like wheels, whiskers, optical sensors, smell, fuel
- level, sound and such. These I/O sensors are connected to
- Neurons. The player/designer uses more Neurons to
- interconnect the I/O devices. One can have any level of
- complexity desired (memory limited) and do subsumptive
- designs. More complex designs take slightly more fuel, so life
- is not free. All movement costs fuel too. One can also tag
- neuron connections as "adaptable" that adapt their weights
- in accordance with the target neuron. This allows neurons
- to learn. The Neuron editor can handle 3-dimensional arrays
- of neurons as single entities with very flexible interconnect
- patterns.
-
- One can then design a scenario with walls, rocks, lights, fat
- (fuel) sources (that can be smelled) and many other such
- things. Robot tanks are then introduced into the Scenario
- and allowed to interact or battle it out. The last one alive
- wins, or maybe one just watches the motion of the robots
- for fun. While the scenario is running it can be stopped,
- edited, zoomed, and made to track any robot.
-
- The entire program is mouse- and graphics-based. It uses
- DOS and VGA and is written in TurboC++. There will
- also be the ability to download designs to another computer
- and source code will be available for the core neural
- simulator. This will allow one to design neural systems and
- download them to real robots. The design tools can handle
- three dimentional networks so will work with video camera
- inputs and such. Eventualy I expect to do a port to UNIX
- and multi thread the sign. I also expect to do a Mac port
- and maybe NT or OS/2
-
- Copies of NuTank cost $50 each. Contact: Richard Keene;
- Keene Educational Software;
- Dick.Keene@Central.Sun.COM
-
- NuTank shareware with the Save options disabled is
- available via anonymous ftp from the Internet, see the file
- /pub/incoming/nutank.readme on the host
- cher.media.mit.edu.
-
- 11. Neuralyst
- +++++++++++++
-
- Name: Neuralyst Version 1.4; Company: Cheshire
- Engineering Corporation; Address: 650 Sierra Madre Villa,
- Suite 201, Pasadena, CA 91107; Phone: 818-351-0209;
- Fax: 818-351-8645;
-
- Basic capabilities: training of backpropagation neural nets.
- Operating system: Windows or Macintosh running
- Microsoft Excel Spreadsheet. Neuralyst is an add-in
- package for Excel. Approx. price: $195 for Windows or
- Mac. Comments: A simple model that is easy to use.
- Integrates nicely into Microsoft Excel. Allows user to
- create, train, and run backprop ANN models entirely
- within an Excel spreadsheet. Provides macro functions that
- can be called from Excel macros, allowing you to build a
- custom Windows interface using Excel's macro language
- and Visual Basic tools. The new version 1.4 includes a
- genetic algorithm to guide the training process. A good
- bargain to boot. (Comments by Duane Highley, a user and
- NOT the program developer;
- dhighley@ozarks.sgcl.lib.mo.us)
-
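- The entry does not say how Neuralyst's genetic algorithm
- guides training. Purely for illustration, here is a generic
- GA that evolves a weight vector to minimize an error
- function (compare the genetic algorithms question earlier in
- this FAQ); the toy fitness function stands in for a real
- network error measure:
-
-   import random
-
-   def ga_search(fitness, dim, pop_size=20,
-                 generations=100, mut_rate=0.1):
-       pop = [[random.uniform(-1, 1) for _ in range(dim)]
-              for _ in range(pop_size)]
-       for _ in range(generations):
-           pop.sort(key=fitness)          # lower error = fitter
-           parents = pop[:pop_size // 2]  # keep the best half
-           children = []
-           while len(parents) + len(children) < pop_size:
-               a, b = random.sample(parents, 2)
-               cut = random.randrange(dim)   # one-point crossover
-               child = a[:cut] + b[cut:]
-               for i in range(dim):          # pointwise mutation
-                   if random.random() < mut_rate:
-                       child[i] += random.gauss(0, 0.1)
-               children.append(child)
-           pop = parents + children
-       return min(pop, key=fitness)
-
-   # Toy fitness: squared distance from a known target vector.
-   target = [0.5, -0.25, 0.75]
-   best = ga_search(lambda w: sum((wi - ti) ** 2
-                                  for wi, ti in zip(w, target)),
-                    dim=3)
-   print(best)   # close to target after 100 generations
-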
- 12. NeuFuz4
- +++++++++++
-
- Name: NeuFuz4 Company: National Semiconductor
- Corporation Address: 2900 Semiconductor Drive, Santa
- Clara, CA, 95052, or: Industriestrasse 10, D-8080
- Fuerstenfeldbruck, Germany, or: Sumitomo Chemical
- Engineering Center, Bldg. 7F 1-7-1, Nakase, Mihama-Ku,
- Chiba-City, Chiba Prefecture 261, JAPAN, or: 15th Floor,
- Straight Block, Ocean Centre, 5 Canton Road, Tsim Sha
- Tsui East, Kowloon, Hong Kong, Phone: (800) 272-9959
- (Americas), 011-49-8141-103-0 (Germany),
- 011-81-3-3299-7001 (Japan), (852) 737-1600 (Hong Kong)
- Email: neufuz@esd.nsc.com (Neural net inquiries only)
- URL:
- http://www.commerce.net/directories/participants/ns/home.html
- Basic capabilities: Uses backpropagation techniques to
- initially select fuzzy rules and membership functions. The
- result is a fuzzy associative memory (FAM) which
- implements an approximation of the training data.
- Operating Systems: 486DX-25 or higher with math
- co-processor, DOS 5.0 or higher with Windows 3.1, mouse,
- VGA or better, minimum 4 MB RAM, and parallel port.
- Approx. price: depends on version - see below. Comments:
- Not for the serious Neural Network researcher, but good
- for a person who has little understanding of Neural Nets -
- and wants to keep it that way. The systems are aimed at
- low-end control applications in automotive, industrial, and
- appliance areas. NeuFuz is a neural-fuzzy technology: the
- initial stages of design are performed on training data
- using backpropagation, which selects the fuzzy rules and
- membership functions; the result is the FAM mentioned
- above. By implementing a FAM, rather than a multi-layer
- perceptron, the designer has a solution which can be
- understood and tuned to a particular application using
- Fuzzy Logic design techniques. There are several different
- versions, some with COP8 Code Generator (COP8 is
- National's family of 8-bit microcontrollers) and COP8
- in-circuit emulator (debug module).
-
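- To make the FAM idea concrete, here is a generic zero-order
- Sugeno-style rule evaluation - not NeuFuz's actual
- implementation; in NeuFuz the membership functions and rule
- outputs would be selected by backpropagation from training
- data rather than written by hand as below:
-
-   def tri(x, a, b, c):
-       # Triangular membership function peaking at b; a == b
-       # or b == c gives a shoulder (half-open) function.
-       if x < a or x > c:
-           return 0.0
-       if x <= b:
-           return 1.0 if a == b else (x - a) / (b - a)
-       return 1.0 if b == c else (c - x) / (c - b)
-
-   # Each rule: (membership function parameters, output value).
-   rules = [((0.0, 0.0, 0.5), -1.0),   # "x is LOW  -> -1"
-            ((0.0, 0.5, 1.0),  0.0),   # "x is MED  ->  0"
-            ((0.5, 1.0, 1.0),  1.0)]   # "x is HIGH -> +1"
-
-   def fam_output(x):
-       # Weighted average of rule outputs by firing degree.
-       degrees = [tri(x, *mf) for mf, _ in rules]
-       num = sum(d * out for d, (_, out) in zip(degrees, rules))
-       den = sum(degrees)
-       return num / den if den else 0.0
-
-   print(fam_output(0.25))   # -0.5: interpolates LOW and MED
-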
- 13. Cortex-Pro
- ++++++++++++++
-
- Cortex-Pro information is on WWW at:
- http://www.neuronet.ph.kcl.ac.uk/neuronet/software/cortex/www1.html.
- You can download a working demo from there. Contact:
- Michael Reiss
- (http://www.mth.kcl.ac.uk/~mreiss/mick.html),
- email: <m.reiss@kcl.ac.uk>.
-
- 14. PARTEK
- ++++++++++
-
- PARTEK is a powerful, integrated environment for visual
- and quantitative data analysis and pattern recognition.
- Drawing from a wide variety of disciplines including
- Artificial Neural Networks, Fuzzy Logic, Genetic
- Algorithms, and Statistics, PARTEK integrates data
- analysis and modeling tools into an easy to use "point and
- click" system. The following modules are available from
- PARTEK; functions from different modules are integrated
- with each other wherever possible:
- 1. The PARTEK/AVB - The Analytical/Visual Base (TM)
-
- * Analytical Spreadsheet (TM)
- The Analytical Spreadsheet is a powerful and easy-to-use tool for data
- analysis, transformation, and visualization. Some features include:
- - import native-format ASCII/binary data
- - recognition and resolution of missing data
- - complete set of common mathematical & statistical functions
- - contingency table analysis / correspondence analysis
- - univariate histogram analysis
- - extensive set of smoothing and normalization transformations
- - easily and quickly plot color-coded 1-D curves and histograms,
- 2-D, 3-D, and N-D mapped scatterplots, highlighting selected
- patterns
- - Command Line (Tcl) and Graphical Interface
-
- * Pattern Visualization System (TM)
- The Pattern Visualization System offers the most powerful tools for
- visual analysis of the patterns in your data. Some features include:
- - automatically maps N-D data down to 3-D for visualization of
- *all* of your variables at once
- - hard copy color Postscript output
- - a variety of color-coding, highlighting, and labeling options
- allow you to generate meaningful graphics
-
- * Data Filters
- Filter out selected rows and/or columns of your data for flexible and
- efficient cross-validation, jackknifing, bootstrapping, feature set
- evaluation, and more.
-
- * Random # Generators
- Generate random numbers from any of the following parameterized
- distributions:
- - uniform, normal, exponential, gamma, binomial, poisson
-
- * Many distance/similarity metrics
- Choose the appropriate distance metric for your data (three of
- these are sketched in code at the end of this module's list):
- - euclidean, mahalanobis, minkowski, maximum value, absolute value,
- shape coefficient, cosine coefficient, pearson correlation,
- rank correlation, kendall's tau, canberra, and bray-curtis
-
- * Tcl/Tk command line interface
-
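- As promised above, three of the listed metrics sketched for
- plain Python vectors (generic textbook definitions, not
- PARTEK's code):
-
-   from math import sqrt
-
-   def euclidean(x, y):
-       return sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
-
-   def minkowski(x, y, p=3):
-       # p = 2 gives euclidean, p = 1 absolute value (city block)
-       return sum(abs(a - b) ** p
-                  for a, b in zip(x, y)) ** (1.0 / p)
-
-   def cosine_coefficient(x, y):
-       dot = sum(a * b for a, b in zip(x, y))
-       nx = sqrt(sum(a * a for a in x))
-       ny = sqrt(sum(b * b for b in y))
-       return dot / (nx * ny)
-
-   print(euclidean([0, 0], [3, 4]))            # 5.0
-   print(minkowski([0, 0], [3, 4], p=1))       # 7.0
-   print(cosine_coefficient([1, 0], [1, 1]))   # ~0.707
-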
- 2. The PARTEK/DSA - Data Structure Analysis
- Module
-
- * Principal Components Analysis and Regression
- Also known as Eigenvector Projection or Karhunen-Loeve Expansions,
- PCA removes redundant information from your data.
- - component analysis, correlate PCs with original variables
- - choice of covariance, correlation, or product dispersion matrices
- - choice of eigenvector, y-score, and z-score projections
- - view scree and log-eigenvalue plots
-
- * Cluster Analysis
- Does the data form groups? How many? How compact? Cluster Analysis
- is the tool to answer these questions.
- - choose between several distance metrics
- - optionally weight individual patterns
- - manually or auto-select the cluster number and initial centers
- - dump cluster counts, means, cluster-to-cluster distances,
- cluster variances, and cluster labeled data to a matrix viewer or
- the Analytical Spreadsheet for further analysis
- - visualize n-dimensional clustering
- - assess goodness of partition using several internal and external
- criteria metrics
-
- * N-Dimensional Histogram Analysis
- One of the most important questions a researcher faces when
- analyzing patterns is whether or not the patterns can distinguish
- different classes of data. N-D Histogram Analysis is one tool to
- answer this question.
- - measures histogram overlap in n-dimensional space
- - automatically find the best subset of features
- - rank the overlap of your best feature combinations
-
- * Non-Linear Mapping
- NLM is an iterative algorithm for visually analyzing the structure of
- n-dimensional data. NLM produces a non-linear mapping which preserves
- the interpoint distances of the n-dimensional data while reducing it
- to a lower dimensionality - thus preserving the structure of the data
- (a toy version is sketched in code below).
- - visually analyze structure of n-dimensional data
- - track progress with error curves
- - orthogonal, PCA, and random initialization
-
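- The toy NLM version promised above: start from random 2-D
- positions and repeatedly nudge each pair of points so that
- its 2-D distance approaches the original n-D distance. Real
- implementations (e.g. Sammon's mapping) minimize an explicit
- stress function; this sketch only shows the underlying idea:
-
-   import random
-   from math import sqrt
-
-   def dist(a, b):
-       return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
-
-   def nlm(data, epochs=200, rate=0.05):
-       proj = [[random.uniform(-1, 1), random.uniform(-1, 1)]
-               for _ in data]
-       for _ in range(epochs):
-           for i in range(len(data)):
-               for j in range(i + 1, len(data)):
-                   target = dist(data[i], data[j])
-                   actual = dist(proj[i], proj[j]) or 1e-9
-                   g = rate * (target - actual) / actual
-                   for k in range(2):
-                       d = g * (proj[i][k] - proj[j][k])
-                       proj[i][k] += d   # apart if too close,
-                       proj[j][k] -= d   # together if too far
-       return proj
-
-   corners = [[0, 0, 0], [0, 0, 1], [1, 1, 0], [1, 1, 1]]
-   print(nlm(corners))   # 2-D layout preserving 3-D distances
-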
- 3. The PARTEK/CP - Classification and Prediction
- Module.
-
- * Multi-Layer Perceptron
- The most popular among the neural pattern recognition tools is the MLP.
- PARTEK takes the MLP to a new dimension by allowing the network to
- learn by adapting ALL of its parameters to solve a problem.
- - adapts output bias, neuron activation steepness, and neuron
- dynamic range, as well as weights and input biases
- - auto-scaling at input and output - no need to rescale your data
- - choose between sigmoid, gaussian, linear, or mixture of neurons
- - learning rate, momentum can be set independently for each parameter
- - variety of learning methods and network initializations
- - view color-coded network, error, etc as network trains, tests, runs
-
- * Learning Vector Quantization
- Because LVQ is a multiple-prototype classifier, it adapts to identify
- multiple sub-groups within classes (see the sketch after this list):
- - LVQ1, LVQ2, and LVQ3 training methods
- - 3 different functions for adapting learning rate
- - choose between several distance metrics
- - fuzzy and crisp classifications
- - set number of prototypes individually for each class
-
- * Bayesian Classifier
- Bayes methods are the statistical decision theory approach to
- classification. This classifier uses statistical properties of your
- data to develop a classification model.
-
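- The LVQ1 sketch promised above (the generic algorithm, not
- PARTEK's code): find the nearest prototype, then pull it
- toward the pattern if its class label matches, or push it
- away otherwise.
-
-   def lvq1_step(prototypes, pattern, label, rate=0.05):
-       # prototypes: list of (vector, class_label) pairs.
-       i = min(range(len(prototypes)),
-               key=lambda k: sum((p - x) ** 2
-                                 for p, x in zip(prototypes[k][0],
-                                                 pattern)))
-       vec, cls = prototypes[i]
-       sign = 1.0 if cls == label else -1.0
-       prototypes[i] = ([p + sign * rate * (x - p)
-                         for p, x in zip(vec, pattern)], cls)
-
-   # Two prototypes per class, as the option list above allows:
-   protos = [([0.0, 0.0], "A"), ([0.2, 0.1], "A"),
-             ([1.0, 1.0], "B"), ([0.9, 0.8], "B")]
-   lvq1_step(protos, [0.1, 0.1], "A")
-   print(protos)   # nearest "A" prototype moved toward pattern
-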
- PARTEK is available on HP, IBM, Silicon Graphics, and
- SUN workstations. For more information, send email to
- "info@partek.com" or call (314)926-2329.
-
- ------------------------------------------------------------------------
-
- 19. A: Neural Network hardware?
- ===============================
-
- [who will write some short comment on the most important
- HW-packages and chips?]
-
- Issue number 1 of each volume of the journal "Neural Networks"
- has a list of some dozens of suppliers of Neural Network support:
- Software, Hardware, Support, Programming, Design and Service.
-
- Here is a short list of companies:
-
- 1. HNC, INC.
- ++++++++++++
-
- 5501 Oberlin Drive
- San Diego
- California 92121
- (619) 546-8877
- and a second address at
- 7799 Leesburg Pike, Suite 900
- Falls Church, Virginia
- 22043
- (703) 847-6808
- Note: Australian Dist.: Unitronics
- Tel : (09) 4701443
- Contact: Martin Keye
- HNC markets:
- 'Image Document Entry Processing Terminal' - it recognises
- handwritten documents and converts the info to ASCII.
- 'ExploreNet 3000' - a NN demonstrator.
- 'Anza/DP Plus' - a Neural Net board with 25 MFlops or
- 12.5M peak interconnects per second.
-
- 2. SAIC (Science Applications International Corporation)
- ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-
- 10260 Campus Point Drive
- MS 71, San Diego
- CA 92121
- (619) 546 6148
- Fax: (619) 546 6736
-
- 3. Micro Devices
- ++++++++++++++++
-
- 30 Skyline Drive
- Lake Mary
- FL 32746-6201
- (407) 333-4379
- Micro Devices makes the MD1220 - 'Neural Bit Slice'.
- The products mentioned so far have very different usages.
- Although this one sounds similar to Intel's product, the
- architectures are quite different.
-
- 4. Intel Corp
- +++++++++++++
-
- 2250 Mission College Blvd
- Santa Clara, Ca 95052-8125
- Attn ETANN, Mail Stop SC9-40
- (408) 765-9235
- Intel is making an experimental chip:
- 80170NX - Electrically Trainable Analog Neural Network (ETANN)
- It has 64 'neurons' on it - almost fully internally connected -
- and the chip can be put in a hierarchical architecture to do
- 2 billion interconnects per second.
- Support software has already been made by
- California Scientific Software
- 10141 Evening Star Dr #6
- Grass Valley, CA 95945-9051
- (916) 477-7481
- Their product is called 'BrainMaker'.
-
- 5. NeuralWare, Inc
- ++++++++++++++++++
-
- Penn Center West
- Bldg IV Suite 227
- Pittsburgh
- PA 15276
- They sell only software (simulators), but for many platforms.
-
- 6. Tubb Research Limited
- ++++++++++++++++++++++++
-
- 7a Lavant Street
- Petersfield
- Hampshire
- GU32 2EL
- United Kingdom
- Tel: +44 730 60256
-
- 7. Adaptive Solutions Inc
- +++++++++++++++++++++++++
-
- 1400 NW Compton Drive
- Suite 340
- Beaverton, OR 97006
- U. S. A.
- Tel: 503-690-1236; FAX: 503-690-1249
-
- 8. NeuroDynamX, Inc.
- ++++++++++++++++++++
-
- 4730 Walnut St., Suite 101B
- Boulder, CO 80301
- Voice: (303) 442-3539 Fax: (303) 442-2854
- Internet: techsupport@ndx.com
- NDX sells a number of neural network hardware products:
- NDX Neural Accelerators: a line of i860-based accelerator cards for
- the PC that give up to 45 million connections per second for use
- with the DynaMind neural network software.
- iNNTS: Intel's 80170NX (ETANN) Neural Network Training System. NDX's president
- was one of the co-designers of this chip.
-
- 9. IC Tech
- ++++++++++
-
- NEURO-COMPUTING IC's:
- * DANN050L (dendro-dendritic artificial neural network)
- + 50 neurons fully connected at the input
- + on-chip digital learning capability
- + 6 billion connections/sec peak speed
- + learns 7 x 7 template in < 50 nsec., recalls in < 400 nsec.
- + low power: < 100 milliwatts
- + 64-pin package
- * NCA717D (neuro correlator array)
- + analog template matching in < 500 nsec.
- + analog input / digital output pins for real-time computation
- + vision applications in stereo and motion computation
- + 40-pin package
- NEURO COMPUTING BOARD:
- * ICT1050
- + IBM PC compatible or higher
- + with on-board DANN050L
- + digital interface
- + custom configurations available
- Contact:
- IC Tech (Innovative Computing Technologies, Inc.)
- 4138 Luff Court
- Okemos, MI 48864
- (517) 349-4544
- ictech@mcimail.com
-
- And here is an incomplete overview of known Neural
- Computers with their most recent known reference.
-
- \subsection*{Digital}
- \subsubsection{Special Computers}
-
- {\bf AAP-2}
- Takumi Watanabe, Yoshi Sugiyama, Toshio Kondo, and Yoshihiro Kitamura.
- Neural network simulation on a massively parallel cellular array
- processor: AAP-2.
- In International Joint Conference on Neural Networks, 1989.
-
- {\bf ANNA}
- B.E.Boser, E.Sackinger, J.Bromley, Y.LeCun, and L.D.Jackel.\\
- Hardware Requirements for Neural Network Pattern Classifiers.\\
- In {\it IEEE Micro}, 12(1), pages 32-40, February 1992.
-
- {\bf Analog Neural Computer}
- Paul Mueller et al.
- Design and performance of a prototype analog neural computer.
- In Neurocomputing, 4(6):311-323, 1992.
-
- {\bf APx -- Array Processor Accelerator}\\
- F.Pazienti.\\
- Neural networks simulation with array processors.
- In {\it Advanced Computer Technology, Reliable Systems and Applications;
- Proceedings of the 5th Annual Computer Conference}, pages 547-551.
- IEEE Comput. Soc. Press, May 1991. ISBN: 0-8186-2141-9.
-
- {\bf ASP -- Associative String Processor}\\
- A.Krikelis.\\
- A novel massively associative processing architecture for the
- implementation of artificial neural networks.\\
- In {\it 1991 International Conference on Acoustics, Speech and
- Signal Processing}, volume 2, pages 1057-1060. IEEE Comput. Soc. Press,
- May 1991.
-
- {\bf BSP400}
- Jan N.H. Heemskerk, Jacob M.J. Murre, Jaap Hoekstra, Leon H.J.G.
- Kemna, and Patrick T.W. Hudson.
- The BSP400: A modular neurocomputer assembled from 400 low-cost
- microprocessors.
- In International Conference on Artificial Neural Networks. Elsevier
- Science, 1991.
-
- {\bf BLAST}\\
- J.G.Elias, M.D.Fisher, and C.M.Monemi.\\
- A multiprocessor machine for large-scale neural network simulation.
- In {\it IJCNN91-Seattle: International Joint Conference on Neural
- Networks}, volume 1, pages 469-474. IEEE Comput. Soc. Press, July 1991.
- ISBN: 0-7803-0164-1.
-
- {\bf CNAPS Neurocomputer}\\
- H.McCartor\\
- Back Propagation Implementation on the Adaptive Solutions CNAPS
- Neurocomputer.\\
- In {\it Advances in Neural Information Processing Systems}, 3, 1991.
-
- {\bf GENES~IV and MANTRA~I}\\
- Paolo Ienne and Marc A. Viredaz\\
- {GENES~IV}: A Bit-Serial Processing Element for a Multi-Model
- Neural-Network Accelerator\\
- Proceedings of the International Conference on Application Specific Array
- Processors, Venezia, 1993.
-
- {\bf MA16 -- Neural Signal Processor}
- U.Ramacher, J.Beichter, and N.Bruls.\\
- Architecture of a general-purpose neural signal processor.\\
- In {\it IJCNN91-Seattle: International Joint Conference on Neural
- Networks}, volume 1, pages 443-446. IEEE Comput. Soc. Press, July 1991.
- ISBN: 0-7803-0164-1.
-
- {\bf MANTRA I}\\
- Marc A. Viredaz\\
- {MANTRA~I}: An {SIMD} Processor Array for Neural Computation
- Proceedings of the Euro-ARCH'93 Conference, {M\"unchen}, 1993.
-
- {\bf Mindshape}
- Jan N.H. Heemskerk, Jacob M.J. Murre, Arend Melissant, Mirko Pelgrom,
- and Patrick T.W. Hudson.
- Mindshape: a neurocomputer concept based on a fractal architecture.
- In International Conference on Artificial Neural Networks. Elsevier
- Science, 1992.
-
- {\bf mod 2}
- Michael L. Mumford, David K. Andes, and Lynn R. Kern.
- The mod 2 neurocomputer system design.
- In IEEE Transactions on Neural Networks, 3(3):423-433, 1992.
-
- {\bf NERV}\\
- R.Hauser, H.Horner, R. Maenner, and M.Makhaniok.\\
- Architectural Considerations for NERV - a General Purpose Neural
- Network Simulation System.\\
- In {\it Workshop on Parallel Processing: Logic, Organization and
- Technology -- WOPPLOT 89}, pages 183-195. Springer Verlag, March 1989.
- ISBN: 3-540-55027-5.
-
- {\bf NP -- Neural Processor}\\
- D.A.Orrey, D.J.Myers, and J.M.Vincent.\\
- A high performance digital processor for implementing large artificial
- neural networks.\\
- In {\it Proceedings of the IEEE 1991 Custom Integrated Circuits
- Conference}, pages 16.3/1-4. IEEE Comput. Soc. Press, May 1991.
- ISBN: 0-7803-0015-7.
-
- {\bf RAP -- Ring Array Processor }\\
- N.Morgan, J.Beck, P.Kohn, J.Bilmes, E.Allman, and J.Beer.\\
- The ring array processor: A multiprocessing peripheral for connectionist
- applications. \\
- In {\it Journal of Parallel and Distributed Computing}, pages
- 248-259, April 1992.
-
- {\bf RENNS -- REconfigurable Neural Networks Server}\\
- O.Landsverk, J.Greipsland, J.A.Mathisen, J.G.Solheim, and L.Utne.\\
- RENNS - a Reconfigurable Computer System for Simulating Artificial
- Neural Network Algorithms.\\
- In {\it Parallel and Distributed Computing Systems, Proceedings of the
- ISMM 5th International Conference}, pages 251-256. The International
- Society for Mini and Microcomputers - ISMM, October 1992.
- ISBN: 1-8808-4302-1.
-
- {\bf SMART -- Sparse Matrix Adaptive and Recursive Transforms}\\
- P.Bessiere, A.Chams, A.Guerin, J.Herault, C.Jutten, and J.C.Lawson.\\
- From Hardware to Software: Designing a ``Neurostation''.\\
- In {\it VLSI design of Neural Networks}, pages 311-335, June 1990.
-
- {\bf SNAP -- Scalable Neurocomputer Array Processor}
- E.Wojciechowski.\\
- SNAP: A parallel processor for implementing real time neural networks.\\
- In {\it Proceedings of the IEEE 1991 National Aerospace and Electronics
- Conference; NAECON-91}, volume 2, pages 736-742. IEEE Comput.Soc.Press,
- May 1991.
-
- {\bf Toroidal Neural Network Processor}\\
- S.Jones, K.Sammut, C.Nielsen, and J.Staunstrup.\\
- Toroidal Neural Network: Architecture and Processor Granularity
- Issues.\\
- In {\it VLSI design of Neural Networks}, pages 229-254, June 1990.
-
- {\bf SMART and SuperNode}
- P. Bessiere, A. Chams, and P. Chol.
- MENTAL : A virtual machine approach to artificial neural networks
- programming. In NERVES, ESPRIT B.R.A. project no 3049, 1991.
-
-
- \subsubsection{Standard Computers}
-
- {\bf EMMA-2}\\
- R.Battiti, L.M.Briano, R.Cecinati, A.M.Colla, and P.Guido.\\
- An application oriented development environment for Neural Net models on
- multiprocessor Emma-2.\\
- In {\it Silicon Architectures for Neural Nets; Proceedings for the IFIP
- WG.10.5 Workshop}, pages 31-43. North Holland, November 1991.
- ISBN: 0-4448-9113-7.
-
- {\bf iPSC/860 Hypercube}\\
- D.Jackson, and D.Hammerstrom\\
- Distributing Back Propagation Networks Over the Intel iPSC/860
- Hypercube\\
- In {\it IJCNN91-Seattle: International Joint Conference on Neural
- Networks}, volume 1, pages 569-574. IEEE Comput. Soc. Press, July 1991.
- ISBN: 0-7803-0164-1.
-
- {\bf SCAP -- Systolic/Cellular Array Processor}\\
- Wei-Ling L., V.K.Prasanna, and K.W.Przytula.\\
- Algorithmic Mapping of Neural Network Models onto Parallel SIMD
- Machines.\\
- In {\it IEEE Transactions on Computers}, 40(12), pages 1390-1401,
- December 1991. ISSN: 0018-9340.
-
- ------------------------------------------------------------------------
-
- 20. A: Databases for experimentation with NNs?
- ==============================================
-
- 1. The neural-bench Benchmark collection
- ++++++++++++++++++++++++++++++++++++++++
-
- Accessible via anonymous FTP on ftp.cs.cmu.edu
- [128.2.206.173] in directory /afs/cs/project/connect/bench. In
- case of problems or if you want to donate data, email
- contact is "neural-bench@cs.cmu.edu". The data sets in
- this repository include the 'nettalk' data, 'two spirals',
- protein structure prediction, vowel recognition, sonar signal
- classification, and a few others.
-
- 2. Proben1
- ++++++++++
-
- Proben1 is a collection of 12 learning problems consisting
- of real data. The datafiles all share a single simple common
- format. Along with the data comes a technical report
- describing a set of rules and conventions for performing
- and reporting benchmark tests and their results. Accessible
- via anonymous FTP on ftp.cs.cmu.edu [128.2.206.173] as
- /afs/cs/project/connect/bench/contrib/prechelt/proben1.tar.gz
- and also on ftp.ira.uka.de [129.13.10.90] as
- /pub/neuron/proben.tar.gz. The file is about 1.8 MB and
- unpacks into about 20 MB.
-
- 3. UCI machine learning database
- ++++++++++++++++++++++++++++++++
-
- Accessible via anonymous FTP on ics.uci.edu [128.195.1.1]
- in directory "/pub/machine-learning-databases".
-
- 4. NIST special databases of the National Institute Of
- ++++++++++++++++++++++++++++++++++++++++++++++++++++++
- Standards And Technology:
- +++++++++++++++++++++++++
-
- Several large databases, each delivered on a CD-ROM.
- Here is a quick list.
- o NIST Binary Images of Printed Digits, Alphas, and
- Text
- o NIST Structured Forms Reference Set of Binary
- Images
- o NIST Binary Images of Handwritten Segmented
- Characters
- o NIST 8-bit Gray Scale Images of Fingerprint Image
- Groups
- o NIST Structured Forms Reference Set 2 of Binary
- Images
- o NIST Test Data 1: Binary Images of Hand-Printed
- Segmented Characters
- o NIST Machine-Print Database of Gray Scale and
- Binary Images
- o NIST 8-Bit Gray Scale Images of Mated
- Fingerprint Card Pairs
- o NIST Supplemental Fingerprint Card Data (SFCD)
- for NIST Special Database 9
- o NIST Binary Image Databases of Census Miniforms
- (MFDB)
- o NIST Mated Fingerprint Card Pairs 2 (MFCP 2)
- o NIST Scoring Package Release 1.0
- o NIST FORM-BASED HANDPRINT
- RECOGNITION SYSTEM
- Here are example descriptions of two of these databases:
-
- NIST special database 2: Structured Forms Reference Set
- -------------------------------------------------------
- (SFRS)
- ------
-
- The NIST database of structured forms contains 5,590 full
- page images of simulated tax forms completed using
- machine print. THERE IS NO REAL TAX DATA IN
- THIS DATABASE. The structured forms used in this
- database are 12 different forms from the 1988 IRS 1040
- Package X. These include Forms 1040, 2106, 2441, 4562,
- and 6251 together with Schedules A, B, C, D, E, F and SE.
- Eight of these forms contain two pages or form faces
- making a total of 20 form faces represented in the database.
- Each image is stored in bi-level black and white raster
- format. The images in this database appear to be real forms
- prepared by individuals, but the images have been
- automatically derived and synthesized using a computer
- and contain no "real" tax data. The entry field values on
- the forms have been automatically generated by a
- computer in order to make the data available without the
- danger of distributing privileged tax information. In
- addition to the images the database includes 5,590 answer
- files, one for each image. Each answer file contains an
- ASCII representation of the data found in the entry fields
- on the corresponding image. Image format documentation
- and example software are also provided. The uncompressed
- database totals approximately 5.9 gigabytes of data.
-
- NIST special database 3: Binary Images of Handwritten
- -----------------------------------------------------
- Segmented Characters (HWSC)
- ---------------------------
-
- Contains 313,389 isolated character images segmented
- from the 2,100 full-page images distributed with "NIST
- Special Database 1". 223,125 digits, 44,951 upper-case, and
- 45,313 lower-case character images. Each character image
- has been centered in a separate 128 by 128 pixel region; the
- error rate of the segmentation and assigned classification is
- less than 0.1%. The uncompressed database totals
- approximately 2.75 gigabytes of image data and includes
- image format documentation and example software.
-
- The system requirements for all databases are a 5.25"
- CD-ROM drive with software to read ISO-9660 format.
- Contact: Darrin L. Dimmick; dld@magi.ncsl.nist.gov;
- (301)975-4147
-
- The prices of the databases are between US$ 250 and US$ 1895.
- If you wish to order a database, please contact: Standard
- Reference Data; National Institute of Standards and
- Technology; 221/A323; Gaithersburg, MD 20899; Phone:
- (301)975-2208; FAX: (301)926-0416
-
- Samples of the data can be found by ftp on
- sequoyah.ncsl.nist.gov in directory /pub/data. A more
- complete description of the available databases can be
- obtained from the same host as /pub/databases/catalog.txt.
-
- 5. CEDAR CD-ROM 1: Database of Handwritten Cities,
- ++++++++++++++++++++++++++++++++++++++++++++++++++
- States, ZIP Codes, Digits, and Alphabetic Characters
- ++++++++++++++++++++++++++++++++++++++++++++++++++++
-
- The Center Of Excellence for Document Analysis and
- Recognition (CEDAR) at the State University of New York
- at Buffalo announces the availability of CEDAR CDROM 1
- (USPS Office of Advanced Technology). The database
- contains handwritten words and ZIP Codes in high
- resolution grayscale (300 ppi 8-bit) as well as binary
- handwritten digits and alphabetic characters (300 ppi
- 1-bit). This database is intended to encourage research in
- off-line handwriting recognition by providing access to
- handwriting samples digitized from envelopes in a working
- post office.
-
- Specifications of the database include:
- + 300 ppi 8-bit grayscale handwritten words (cities,
- states, ZIP Codes)
- o 5632 city words
- o 4938 state words
- o 9454 ZIP Codes
- + 300 ppi binary handwritten characters and digits:
- o 27,837 mixed alphas and numerics segmented
- from address blocks
- o 21,179 digits segmented from ZIP Codes
- + every image supplied with a manually determined
- truth value
- + extracted from live mail in a working U.S. Post
- Office
- + word images in the test set supplied with
- dictionaries of postal words that simulate partial
- recognition of the corresponding ZIP Code.
- + digit images included in the test set that simulate
- automatic ZIP Code segmentation. Results on these
- data can be projected to overall ZIP Code
- recognition performance.
- + image format documentation and software included
-
- System requirements are a 5.25" CD-ROM drive with
- software to read ISO-9660 format. For any further
- information, including how to order the database, please
- contact: Jonathan J. Hull, Associate Director, CEDAR, 226
- Bell Hall, State University of New York at Buffalo,
- Buffalo, NY 14260; hull@cs.buffalo.edu (email)
-
- 6. AI-CD-ROM (see under answer 13)
- ++++++++++++++++++++++++++++++++++
-
- 7. Time series archive
- ++++++++++++++++++++++
-
- Various datasets of time series (to be used for prediction
- learning problems) are available for anonymous ftp from
- ftp.santafe.edu [192.12.12.1] in /pub/Time-Series.
- Problems are for example: fluctuations in a far-infrared
- laser; physiological data of patients with sleep apnea; high
- frequency currency exchange rate data; intensity of a white
- dwarf star; J.S. Bach's final (unfinished) fugue from "Die
- Kunst der Fuge".
-
- Some of the datasets were used in a prediction contest and
- are described in detail in the book "Time series prediction:
- Forecasting the future and understanding the past", edited
- by Weigend/Gershenfeld, Proceedings Volume XV in the
- Santa Fe Institute Studies in the Sciences of Complexity
- series of Addison Wesley (1994).
-
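- For prediction learning, such a series is usually converted
- into supervised training examples with a sliding window: the
- last w values form the network input and the following value
- is the target. A minimal sketch:
-
-   def make_windows(series, w):
-       return [(series[i:i + w], series[i + w])
-               for i in range(len(series) - w)]
-
-   series = [0.0, 0.8, 1.0, 0.3, -0.5, -1.0, -0.6, 0.2]
-   for inputs, target in make_windows(series, w=3):
-       print(inputs, "->", target)
-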
- ------------------------------------------------------------------------
-
- That's all folks.
-
- Acknowledgements: Thanks to all the people who helped to get the stuff
- above into the posting. I cannot name them all, because
- I would make far too many errors then. :->
-
- No? Not good? You want individual credit?
- OK, OK. I'll try to name them all. But: no guarantee....
-
- THANKS FOR HELP TO:
- (in alphabetical order of email addresses, I hope)
-
- o Steve Ward <71561.2370@CompuServe.COM>
- o Allen Bonde <ab04@harvey.gte.com>
- o Accel Infotech Spore Pte Ltd <accel@solomon.technet.sg>
- o Ales Krajnc <akrajnc@fagg.uni-lj.si>
- o Alexander Linden <al@jargon.gmd.de>
- o Matthew David Aldous <aldous@mundil.cs.mu.OZ.AU>
- o S.Taimi Ames <ames@reed.edu>
- o Axel Mulder <amulder@move.kines.sfu.ca>
- o anderson@atc.boeing.com
- o Andy Gillanders <andy@grace.demon.co.uk>
- o Davide Anguita <anguita@ICSI.Berkeley.EDU>
- o Avraam Pouliakis <apou@leon.nrcps.ariadne-t.gr>
- o Kim L. Blackwell <avrama@helix.nih.gov>
- o Mohammad Bahrami <bahrami@cse.unsw.edu.au>
- o Paul Bakker <bakker@cs.uq.oz.au>
- o Stefan Bergdoll <bergdoll@zxd.basf-ag.de>
- o Jamshed Bharucha <bharucha@casbs.Stanford.EDU>
- o Yijun Cai <caiy@mercury.cs.uregina.ca>
- o L. Leon Campbell <campbell@brahms.udel.edu>
- o Craig Watson <craig@magi.ncsl.nist.gov>
- o Yaron Danon <danony@goya.its.rpi.edu>
- o David Ewing <dave@ndx.com>
- o David DeMers <demers@cs.ucsd.edu>
- o Denni Rognvaldsson <denni@thep.lu.se>
- o Duane Highley <dhighley@ozarks.sgcl.lib.mo.us>
- o Dick.Keene@Central.Sun.COM
- o DJ Meyer <djm@partek.com>
- o Donald Tveter <drt@mcs.com>
- o Athanasios Episcopos
- <EPISCOPO@icarus.som.clarkson.edu>
- o Frank Schnorrenberg <fs0997@easttexas.tamu.edu>
- o Gary Lawrence Murphy <garym@maya.isis.org>
- o gaudiano@park.bu.edu
- o Lee Giles <giles@research.nj.nec.com>
- o Glen Clark <opto!glen@gatech.edu>
- o Phil Goodman <goodman@unr.edu>
- o guy@minster.york.ac.uk
- o Joerg Heitkoetter
- <heitkoet@lusty.informatik.uni-dortmund.de>
- o Ralf Hohenstein <hohenst@math.uni-muenster.de>
- o Gamze Erten <ictech@mcimail.com>
- o Ed Rosenfeld <IER@aol.com>
- o Jean-Denis Muller <jdmuller@vnet.ibm.com>
- o Jeff Harpster <uu0979!jeff@uu9.psi.com>
- o Jonathan Kamens <jik@MIT.Edu>
- o J.J. Merelo <jmerelo@kal-el.ugr.es>
- o Jon Gunnar Solheim <jon@kongle.idt.unit.no>
- o Josef Nelissen <jonas@beor.informatik.rwth-aachen.de>
- o Joey Rogers <jrogers@buster.eng.ua.edu>
- o Subhash Kak <kak@gate.ee.lsu.edu>
- o Ken Karnofsky <karnofsky@mathworks.com>
- o Kjetil.Noervaag@idt.unit.no
- o Luke Koops <koops@gaul.csd.uwo.ca>
- o William Mackeown <mackeown@compsci.bristol.ac.uk>
- o Mark Plumbley <mark@dcs.kcl.ac.uk>
- o Peter Marvit <marvit@cattell.psych.upenn.edu>
- o masud@worldbank.org
- o Yoshiro Miyata <miyata@sccs.chukyo-u.ac.jp>
- o Madhav Moganti <mmogati@cs.umr.edu>
- o Jyrki Alakuijala <more@ee.oulu.fi>
- o Michael Reiss <m.reiss@kcl.ac.uk>
- o mrs@kithrup.com
- o Maciek Sitnik <msitnik@plearn.edu.pl>
- o R. Steven Rainwater <ncc@ncc.jvnc.net>
- o Paolo Ienne <Paolo.Ienne@di.epfl.ch>
- o Paul Keller <pe_keller@ccmail.pnl.gov>
- o Michael Plonski <plonski@aero.org>
- o Lutz Prechelt <prechelt@ira.uka.de> [creator of FAQ]
- o Richard Andrew Miles Outerbridge
- <ramo@uvphys.phys.uvic.ca>
- o Robin L. Getz <rgetz@esd.nsc.com>
- o Richard Cornelius <richc@rsf.atd.ucar.edu>
- o Rob Cunningham <rkc@xn.ll.mit.edu>
- o Robert.Kocjancic@IJS.si
- o Osamu Saito <saito@nttica.ntt.jp>
- o Warren Sarle <saswss@unx.sas.com>
- o Scott Fahlman <sef+@cs.cmu.edu>
- o <seibert@ll.mit.edu>
- o Sheryl Cormicle <sherylc@umich.edu>
- o Ted Stockwell <ted@aps1.spa.umn.edu>
- o Serge Waterschoot <swater@minf.vub.ac.be>
- o Thomas G. Dietterich <tgd@research.cs.orst.edu>
- o Thomas.Vogel@cl.cam.ac.uk
- o Ulrich Wendl <uli@unido.informatik.uni-dortmund.de>
- o M. Verleysen <verleysen@dice.ucl.ac.be>
- o Sherif Hashem <vg197@neutrino.pnl.gov>
- o Matthew P Wiener <weemba@sagi.wistar.upenn.edu>
- o Wesley Elsberry <welsberr@orca.tamu.edu>
-
- Bye
-
- Lutz
-
- Neural network FAQ / Lutz Prechelt, prechelt@ira.uka.de
- --
- Lutz Prechelt (http://wwwipd.ira.uka.de/~prechelt/) | Whenever you
- Institut fuer Programmstrukturen und Datenorganisation | complicate things,
- Universitaet Karlsruhe; 76128 Karlsruhe; Germany | they get
- (Voice: +49/721/608-4068, FAX: +49/721/694092) | less simple.
-